00:00:00.000 Started by upstream project "autotest-per-patch" build number 132297 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.013 The recommended git tool is: git 00:00:00.014 using credential 00000000-0000-0000-0000-000000000002 00:00:00.016 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.039 Fetching changes from the remote Git repository 00:00:00.041 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.089 Using shallow fetch with depth 1 00:00:00.089 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.089 > git --version # timeout=10 00:00:00.148 > git --version # 'git version 2.39.2' 00:00:00.148 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.208 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.208 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.428 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.440 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.451 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.451 > git config core.sparsecheckout # timeout=10 00:00:04.462 > git read-tree -mu HEAD # timeout=10 00:00:04.478 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.495 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.495 > git rev-list 
--no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.601 [Pipeline] Start of Pipeline 00:00:04.614 [Pipeline] library 00:00:04.616 Loading library shm_lib@master 00:00:04.617 Library shm_lib@master is cached. Copying from home. 00:00:04.633 [Pipeline] node 00:00:19.635 Still waiting to schedule task 00:00:19.635 Waiting for next available executor on ‘vagrant-vm-host’ 00:01:24.246 Running on VM-host-SM9 in /var/jenkins/workspace/raid-vg-autotest 00:01:24.254 [Pipeline] { 00:01:24.265 [Pipeline] catchError 00:01:24.268 [Pipeline] { 00:01:24.312 [Pipeline] wrap 00:01:24.319 [Pipeline] { 00:01:24.325 [Pipeline] stage 00:01:24.326 [Pipeline] { (Prologue) 00:01:24.338 [Pipeline] echo 00:01:24.339 Node: VM-host-SM9 00:01:24.343 [Pipeline] cleanWs 00:01:24.350 [WS-CLEANUP] Deleting project workspace... 00:01:24.350 [WS-CLEANUP] Deferred wipeout is used... 00:01:24.354 [WS-CLEANUP] done 00:01:24.538 [Pipeline] setCustomBuildProperty 00:01:24.616 [Pipeline] httpRequest 00:01:25.041 [Pipeline] echo 00:01:25.043 Sorcerer 10.211.164.101 is alive 00:01:25.055 [Pipeline] retry 00:01:25.058 [Pipeline] { 00:01:25.074 [Pipeline] httpRequest 00:01:25.078 HttpMethod: GET 00:01:25.079 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:01:25.079 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:01:25.084 Response Code: HTTP/1.1 200 OK 00:01:25.085 Success: Status code 200 is in the accepted range: 200,404 00:01:25.086 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:01:33.871 [Pipeline] } 00:01:33.888 [Pipeline] // retry 00:01:33.896 [Pipeline] sh 00:01:34.177 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:01:34.192 [Pipeline] httpRequest 00:01:34.634 [Pipeline] echo 00:01:34.636 Sorcerer 10.211.164.101 is alive 00:01:34.647 [Pipeline] retry 00:01:34.650 [Pipeline] 
{ 00:01:34.666 [Pipeline] httpRequest 00:01:34.670 HttpMethod: GET 00:01:34.671 URL: http://10.211.164.101/packages/spdk_59da1a1d7cf90d41a5ba5d4a44aa51af982a349b.tar.gz 00:01:34.675 Sending request to url: http://10.211.164.101/packages/spdk_59da1a1d7cf90d41a5ba5d4a44aa51af982a349b.tar.gz 00:01:34.681 Response Code: HTTP/1.1 200 OK 00:01:34.682 Success: Status code 200 is in the accepted range: 200,404 00:01:34.683 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_59da1a1d7cf90d41a5ba5d4a44aa51af982a349b.tar.gz 00:02:05.835 [Pipeline] } 00:02:05.851 [Pipeline] // retry 00:02:05.933 [Pipeline] sh 00:02:06.214 + tar --no-same-owner -xf spdk_59da1a1d7cf90d41a5ba5d4a44aa51af982a349b.tar.gz 00:02:09.509 [Pipeline] sh 00:02:09.789 + git -C spdk log --oneline -n5 00:02:09.789 59da1a1d7 nvmf: Expose DIF type of namespace to host again 00:02:09.789 9a34ab7f7 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:02:09.789 b0a35519c nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:02:09.789 dec6d3843 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:02:09.789 4b2d483c6 dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT 00:02:09.808 [Pipeline] writeFile 00:02:09.823 [Pipeline] sh 00:02:10.105 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:10.117 [Pipeline] sh 00:02:10.396 + cat autorun-spdk.conf 00:02:10.396 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.396 SPDK_RUN_ASAN=1 00:02:10.396 SPDK_RUN_UBSAN=1 00:02:10.396 SPDK_TEST_RAID=1 00:02:10.396 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.402 RUN_NIGHTLY=0 00:02:10.404 [Pipeline] } 00:02:10.418 [Pipeline] // stage 00:02:10.433 [Pipeline] stage 00:02:10.435 [Pipeline] { (Run VM) 00:02:10.448 [Pipeline] sh 00:02:10.727 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:10.727 + echo 'Start stage prepare_nvme.sh' 00:02:10.727 Start stage prepare_nvme.sh 00:02:10.727 + [[ -n 5 ]] 00:02:10.727 + disk_prefix=ex5 
00:02:10.727 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:02:10.727 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:02:10.727 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:02:10.727 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.727 ++ SPDK_RUN_ASAN=1 00:02:10.727 ++ SPDK_RUN_UBSAN=1 00:02:10.727 ++ SPDK_TEST_RAID=1 00:02:10.727 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.727 ++ RUN_NIGHTLY=0 00:02:10.727 + cd /var/jenkins/workspace/raid-vg-autotest 00:02:10.727 + nvme_files=() 00:02:10.727 + declare -A nvme_files 00:02:10.727 + backend_dir=/var/lib/libvirt/images/backends 00:02:10.727 + nvme_files['nvme.img']=5G 00:02:10.727 + nvme_files['nvme-cmb.img']=5G 00:02:10.727 + nvme_files['nvme-multi0.img']=4G 00:02:10.727 + nvme_files['nvme-multi1.img']=4G 00:02:10.727 + nvme_files['nvme-multi2.img']=4G 00:02:10.727 + nvme_files['nvme-openstack.img']=8G 00:02:10.727 + nvme_files['nvme-zns.img']=5G 00:02:10.727 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:10.727 + (( SPDK_TEST_FTL == 1 )) 00:02:10.727 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:10.727 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:10.727 + for nvme in "${!nvme_files[@]}" 00:02:10.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:02:10.727 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:10.727 + for nvme in "${!nvme_files[@]}" 00:02:10.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:02:10.727 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:10.727 + for nvme in "${!nvme_files[@]}" 00:02:10.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:02:10.986 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:10.986 + for nvme in "${!nvme_files[@]}" 00:02:10.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:02:10.986 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:10.986 + for nvme in "${!nvme_files[@]}" 00:02:10.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:02:10.986 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:10.986 + for nvme in "${!nvme_files[@]}" 00:02:10.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:02:10.986 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:10.986 + for nvme in "${!nvme_files[@]}" 00:02:10.986 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:02:11.244 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:11.244 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:02:11.244 + echo 'End stage prepare_nvme.sh' 00:02:11.244 End stage prepare_nvme.sh 00:02:11.255 [Pipeline] sh 00:02:11.534 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:11.534 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:02:11.534 00:02:11.534 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:02:11.534 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:02:11.534 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:02:11.534 HELP=0 00:02:11.534 DRY_RUN=0 00:02:11.534 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:02:11.534 NVME_DISKS_TYPE=nvme,nvme, 00:02:11.534 NVME_AUTO_CREATE=0 00:02:11.534 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:02:11.534 NVME_CMB=,, 00:02:11.534 NVME_PMR=,, 00:02:11.534 NVME_ZNS=,, 00:02:11.534 NVME_MS=,, 00:02:11.534 NVME_FDP=,, 00:02:11.534 SPDK_VAGRANT_DISTRO=fedora39 00:02:11.534 SPDK_VAGRANT_VMCPU=10 00:02:11.534 SPDK_VAGRANT_VMRAM=12288 00:02:11.534 SPDK_VAGRANT_PROVIDER=libvirt 00:02:11.534 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:11.534 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:11.534 SPDK_OPENSTACK_NETWORK=0 00:02:11.534 VAGRANT_PACKAGE_BOX=0 00:02:11.534 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:11.534 FORCE_DISTRO=true 00:02:11.534 VAGRANT_BOX_VERSION= 00:02:11.534 EXTRA_VAGRANTFILES= 00:02:11.534 NIC_MODEL=e1000 00:02:11.534 00:02:11.534 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:02:11.534 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:02:14.817 Bringing machine 'default' up with 'libvirt' provider... 00:02:15.754 ==> default: Creating image (snapshot of base box volume). 00:02:16.013 ==> default: Creating domain with the following settings... 00:02:16.013 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731666645_39fb6acf9e733ea631a5 00:02:16.013 ==> default: -- Domain type: kvm 00:02:16.013 ==> default: -- Cpus: 10 00:02:16.013 ==> default: -- Feature: acpi 00:02:16.013 ==> default: -- Feature: apic 00:02:16.013 ==> default: -- Feature: pae 00:02:16.013 ==> default: -- Memory: 12288M 00:02:16.013 ==> default: -- Memory Backing: hugepages: 00:02:16.013 ==> default: -- Management MAC: 00:02:16.013 ==> default: -- Loader: 00:02:16.013 ==> default: -- Nvram: 00:02:16.013 ==> default: -- Base box: spdk/fedora39 00:02:16.013 ==> default: -- Storage pool: default 00:02:16.013 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731666645_39fb6acf9e733ea631a5.img (20G) 00:02:16.013 ==> default: -- Volume Cache: default 00:02:16.013 ==> default: -- Kernel: 00:02:16.013 ==> default: -- Initrd: 00:02:16.013 ==> default: -- Graphics Type: vnc 00:02:16.013 ==> default: -- Graphics Port: -1 00:02:16.013 ==> default: -- Graphics IP: 127.0.0.1 00:02:16.013 ==> default: -- Graphics Password: Not defined 00:02:16.013 ==> default: -- Video Type: cirrus 00:02:16.013 ==> default: -- Video VRAM: 9216 00:02:16.013 ==> default: -- Sound Type: 00:02:16.013 ==> default: -- Keymap: en-us 00:02:16.013 ==> default: -- TPM Path: 00:02:16.013 ==> 
default: -- INPUT: type=mouse, bus=ps2 00:02:16.013 ==> default: -- Command line args: 00:02:16.013 ==> default: -> value=-device, 00:02:16.013 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:16.013 ==> default: -> value=-drive, 00:02:16.013 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:02:16.013 ==> default: -> value=-device, 00:02:16.013 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:16.013 ==> default: -> value=-device, 00:02:16.013 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:16.013 ==> default: -> value=-drive, 00:02:16.013 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:16.013 ==> default: -> value=-device, 00:02:16.013 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:16.013 ==> default: -> value=-drive, 00:02:16.013 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:16.013 ==> default: -> value=-device, 00:02:16.013 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:16.013 ==> default: -> value=-drive, 00:02:16.013 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:16.013 ==> default: -> value=-device, 00:02:16.013 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:16.013 ==> default: Creating shared folders metadata... 00:02:16.013 ==> default: Starting domain. 00:02:17.919 ==> default: Waiting for domain to get an IP address... 00:02:35.996 ==> default: Waiting for SSH to become available... 
00:02:36.930 ==> default: Configuring and enabling network interfaces... 00:02:41.115 default: SSH address: 192.168.121.35:22 00:02:41.115 default: SSH username: vagrant 00:02:41.115 default: SSH auth method: private key 00:02:43.015 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:51.133 ==> default: Mounting SSHFS shared folder... 00:02:52.067 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:52.067 ==> default: Checking Mount.. 00:02:53.002 ==> default: Folder Successfully Mounted! 00:02:53.002 ==> default: Running provisioner: file... 00:02:53.935 default: ~/.gitconfig => .gitconfig 00:02:54.193 00:02:54.193 SUCCESS! 00:02:54.193 00:02:54.193 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:54.193 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:54.193 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:54.193 00:02:54.201 [Pipeline] } 00:02:54.215 [Pipeline] // stage 00:02:54.224 [Pipeline] dir 00:02:54.224 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:54.226 [Pipeline] { 00:02:54.237 [Pipeline] catchError 00:02:54.239 [Pipeline] { 00:02:54.250 [Pipeline] sh 00:02:54.527 + vagrant ssh-config --host vagrant 00:02:54.527 + sed -ne /^Host/,$p 00:02:54.527 + tee ssh_conf 00:02:58.713 Host vagrant 00:02:58.713 HostName 192.168.121.35 00:02:58.713 User vagrant 00:02:58.713 Port 22 00:02:58.713 UserKnownHostsFile /dev/null 00:02:58.713 StrictHostKeyChecking no 00:02:58.713 PasswordAuthentication no 00:02:58.713 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:58.713 IdentitiesOnly yes 00:02:58.713 LogLevel FATAL 00:02:58.713 ForwardAgent yes 00:02:58.713 ForwardX11 yes 00:02:58.713 00:02:58.728 [Pipeline] withEnv 00:02:58.730 [Pipeline] { 00:02:58.744 [Pipeline] sh 00:02:59.023 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:59.023 source /etc/os-release 00:02:59.023 [[ -e /image.version ]] && img=$(< /image.version) 00:02:59.023 # Minimal, systemd-like check. 00:02:59.023 if [[ -e /.dockerenv ]]; then 00:02:59.023 # Clear garbage from the node's name: 00:02:59.023 # agt-er_autotest_547-896 -> autotest_547-896 00:02:59.023 # $HOSTNAME is the actual container id 00:02:59.023 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:59.023 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:59.023 # We can assume this is a mount from a host where container is running, 00:02:59.023 # so fetch its hostname to easily identify the target swarm worker. 
00:02:59.023 container="$(< /etc/hostname) ($agent)" 00:02:59.023 else 00:02:59.023 # Fallback 00:02:59.023 container=$agent 00:02:59.023 fi 00:02:59.023 fi 00:02:59.023 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:59.023 00:02:59.034 [Pipeline] } 00:02:59.049 [Pipeline] // withEnv 00:02:59.057 [Pipeline] setCustomBuildProperty 00:02:59.071 [Pipeline] stage 00:02:59.073 [Pipeline] { (Tests) 00:02:59.089 [Pipeline] sh 00:02:59.501 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:59.771 [Pipeline] sh 00:03:00.049 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:00.062 [Pipeline] timeout 00:03:00.062 Timeout set to expire in 1 hr 30 min 00:03:00.064 [Pipeline] { 00:03:00.076 [Pipeline] sh 00:03:00.353 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:00.920 HEAD is now at 59da1a1d7 nvmf: Expose DIF type of namespace to host again 00:03:00.932 [Pipeline] sh 00:03:01.212 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:01.484 [Pipeline] sh 00:03:01.762 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:02.037 [Pipeline] sh 00:03:02.316 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:03:02.575 ++ readlink -f spdk_repo 00:03:02.575 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:02.575 + [[ -n /home/vagrant/spdk_repo ]] 00:03:02.575 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:02.575 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:02.575 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:02.575 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:02.575 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:02.575 + [[ raid-vg-autotest == pkgdep-* ]] 00:03:02.575 + cd /home/vagrant/spdk_repo 00:03:02.575 + source /etc/os-release 00:03:02.575 ++ NAME='Fedora Linux' 00:03:02.575 ++ VERSION='39 (Cloud Edition)' 00:03:02.575 ++ ID=fedora 00:03:02.575 ++ VERSION_ID=39 00:03:02.575 ++ VERSION_CODENAME= 00:03:02.575 ++ PLATFORM_ID=platform:f39 00:03:02.575 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:02.575 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:02.575 ++ LOGO=fedora-logo-icon 00:03:02.575 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:02.575 ++ HOME_URL=https://fedoraproject.org/ 00:03:02.575 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:02.575 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:02.575 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:02.575 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:02.575 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:02.575 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:02.575 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:02.575 ++ SUPPORT_END=2024-11-12 00:03:02.575 ++ VARIANT='Cloud Edition' 00:03:02.575 ++ VARIANT_ID=cloud 00:03:02.575 + uname -a 00:03:02.575 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:02.575 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:02.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:02.838 Hugepages 00:03:02.838 node hugesize free / total 00:03:02.838 node0 1048576kB 0 / 0 00:03:02.838 node0 2048kB 0 / 0 00:03:02.838 00:03:02.838 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:03.096 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:03.096 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:03.096 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:03:03.096 + rm -f /tmp/spdk-ld-path 00:03:03.096 + source autorun-spdk.conf 00:03:03.096 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:03.096 ++ SPDK_RUN_ASAN=1 00:03:03.096 ++ SPDK_RUN_UBSAN=1 00:03:03.096 ++ SPDK_TEST_RAID=1 00:03:03.096 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:03.096 ++ RUN_NIGHTLY=0 00:03:03.096 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:03.096 + [[ -n '' ]] 00:03:03.096 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:03.096 + for M in /var/spdk/build-*-manifest.txt 00:03:03.096 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:03.096 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:03.096 + for M in /var/spdk/build-*-manifest.txt 00:03:03.096 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:03.096 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:03.096 + for M in /var/spdk/build-*-manifest.txt 00:03:03.096 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:03.096 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:03.096 ++ uname 00:03:03.096 + [[ Linux == \L\i\n\u\x ]] 00:03:03.096 + sudo dmesg -T 00:03:03.096 + sudo dmesg --clear 00:03:03.096 + dmesg_pid=5255 00:03:03.096 + sudo dmesg -Tw 00:03:03.096 + [[ Fedora Linux == FreeBSD ]] 00:03:03.096 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:03.096 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:03.096 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:03.096 + [[ -x /usr/src/fio-static/fio ]] 00:03:03.096 + export FIO_BIN=/usr/src/fio-static/fio 00:03:03.096 + FIO_BIN=/usr/src/fio-static/fio 00:03:03.096 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:03.096 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:03.096 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:03.096 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:03.096 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:03.096 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:03.096 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:03.096 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:03.096 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:03.096 10:31:33 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:03:03.096 10:31:33 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:03.096 10:31:33 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:03.096 10:31:33 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:03:03.096 10:31:33 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:03:03.096 10:31:33 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:03:03.096 10:31:33 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:03.096 10:31:33 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:03:03.096 10:31:33 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:03.096 10:31:33 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:03.355 10:31:33 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:03:03.355 10:31:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:03.355 10:31:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:03.355 10:31:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:03.355 10:31:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:03.355 10:31:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:03.355 10:31:33 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.355 10:31:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.355 10:31:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.355 10:31:33 -- paths/export.sh@5 -- $ export PATH 00:03:03.355 10:31:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.355 10:31:33 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:03.355 10:31:33 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:03.355 10:31:33 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731666693.XXXXXX 00:03:03.355 10:31:33 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731666693.5rldfZ 00:03:03.355 10:31:33 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:03.355 10:31:33 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:03.355 10:31:33 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:03.355 10:31:33 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:03.355 10:31:33 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:03.355 10:31:33 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:03.355 10:31:33 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:03.355 10:31:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.355 10:31:33 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:03:03.355 10:31:33 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:03.355 10:31:33 -- pm/common@17 -- $ local monitor 00:03:03.355 10:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.355 10:31:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.355 10:31:33 -- pm/common@25 -- $ sleep 1 00:03:03.355 10:31:33 -- pm/common@21 -- $ date +%s 00:03:03.355 10:31:33 -- pm/common@21 -- $ date +%s 00:03:03.355 
10:31:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731666693 00:03:03.355 10:31:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731666693 00:03:03.355 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731666693_collect-cpu-load.pm.log 00:03:03.355 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731666693_collect-vmstat.pm.log 00:03:04.290 10:31:34 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:04.290 10:31:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:04.290 10:31:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:04.290 10:31:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:04.290 10:31:34 -- spdk/autobuild.sh@16 -- $ date -u 00:03:04.290 Fri Nov 15 10:31:34 AM UTC 2024 00:03:04.290 10:31:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:04.290 v25.01-pre-214-g59da1a1d7 00:03:04.290 10:31:34 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:04.290 10:31:34 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:04.290 10:31:34 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:04.290 10:31:34 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:04.290 10:31:34 -- common/autotest_common.sh@10 -- $ set +x 00:03:04.290 ************************************ 00:03:04.290 START TEST asan 00:03:04.290 ************************************ 00:03:04.290 using asan 00:03:04.290 10:31:34 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:03:04.290 00:03:04.290 real 0m0.000s 00:03:04.290 user 0m0.000s 00:03:04.290 sys 0m0.000s 00:03:04.290 10:31:34 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:04.290 ************************************ 00:03:04.290 END TEST 
asan 00:03:04.290 ************************************ 00:03:04.290 10:31:34 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:04.290 10:31:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:04.290 10:31:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:04.290 10:31:34 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:04.290 10:31:34 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:04.290 10:31:34 -- common/autotest_common.sh@10 -- $ set +x 00:03:04.290 ************************************ 00:03:04.290 START TEST ubsan 00:03:04.290 ************************************ 00:03:04.290 using ubsan 00:03:04.290 10:31:34 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:03:04.290 00:03:04.290 real 0m0.000s 00:03:04.290 user 0m0.000s 00:03:04.290 sys 0m0.000s 00:03:04.290 10:31:34 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:04.290 10:31:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:04.290 ************************************ 00:03:04.290 END TEST ubsan 00:03:04.290 ************************************ 00:03:04.549 10:31:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:04.549 10:31:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:04.549 10:31:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:04.549 10:31:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:04.549 10:31:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:04.549 10:31:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:04.549 10:31:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:04.549 10:31:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:04.549 10:31:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:03:04.549 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:04.549 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:05.115 Using 'verbs' RDMA provider 00:03:18.262 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:30.500 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:30.500 Creating mk/config.mk...done. 00:03:30.500 Creating mk/cc.flags.mk...done. 00:03:30.500 Type 'make' to build. 00:03:30.500 10:32:00 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:30.500 10:32:00 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:30.500 10:32:00 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:30.500 10:32:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:30.500 ************************************ 00:03:30.500 START TEST make 00:03:30.500 ************************************ 00:03:30.500 10:32:01 make -- common/autotest_common.sh@1127 -- $ make -j10 00:03:31.067 make[1]: Nothing to be done for 'all'. 
00:03:49.191 The Meson build system 00:03:49.191 Version: 1.5.0 00:03:49.191 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:49.191 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:49.191 Build type: native build 00:03:49.191 Program cat found: YES (/usr/bin/cat) 00:03:49.191 Project name: DPDK 00:03:49.191 Project version: 24.03.0 00:03:49.191 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:49.191 C linker for the host machine: cc ld.bfd 2.40-14 00:03:49.191 Host machine cpu family: x86_64 00:03:49.191 Host machine cpu: x86_64 00:03:49.191 Message: ## Building in Developer Mode ## 00:03:49.191 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:49.191 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:49.191 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:49.191 Program python3 found: YES (/usr/bin/python3) 00:03:49.191 Program cat found: YES (/usr/bin/cat) 00:03:49.191 Compiler for C supports arguments -march=native: YES 00:03:49.191 Checking for size of "void *" : 8 00:03:49.191 Checking for size of "void *" : 8 (cached) 00:03:49.191 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:49.191 Library m found: YES 00:03:49.191 Library numa found: YES 00:03:49.191 Has header "numaif.h" : YES 00:03:49.191 Library fdt found: NO 00:03:49.191 Library execinfo found: NO 00:03:49.191 Has header "execinfo.h" : YES 00:03:49.191 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:49.191 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:49.191 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:49.191 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:49.191 Run-time dependency openssl found: YES 3.1.1 00:03:49.191 Run-time dependency libpcap found: YES 1.10.4 00:03:49.191 Has header "pcap.h" with dependency 
libpcap: YES 00:03:49.191 Compiler for C supports arguments -Wcast-qual: YES 00:03:49.191 Compiler for C supports arguments -Wdeprecated: YES 00:03:49.191 Compiler for C supports arguments -Wformat: YES 00:03:49.191 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:49.191 Compiler for C supports arguments -Wformat-security: NO 00:03:49.191 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:49.191 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:49.191 Compiler for C supports arguments -Wnested-externs: YES 00:03:49.191 Compiler for C supports arguments -Wold-style-definition: YES 00:03:49.191 Compiler for C supports arguments -Wpointer-arith: YES 00:03:49.191 Compiler for C supports arguments -Wsign-compare: YES 00:03:49.191 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:49.191 Compiler for C supports arguments -Wundef: YES 00:03:49.191 Compiler for C supports arguments -Wwrite-strings: YES 00:03:49.191 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:49.191 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:49.191 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:49.191 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:49.191 Program objdump found: YES (/usr/bin/objdump) 00:03:49.191 Compiler for C supports arguments -mavx512f: YES 00:03:49.191 Checking if "AVX512 checking" compiles: YES 00:03:49.191 Fetching value of define "__SSE4_2__" : 1 00:03:49.191 Fetching value of define "__AES__" : 1 00:03:49.191 Fetching value of define "__AVX__" : 1 00:03:49.191 Fetching value of define "__AVX2__" : 1 00:03:49.191 Fetching value of define "__AVX512BW__" : (undefined) 00:03:49.191 Fetching value of define "__AVX512CD__" : (undefined) 00:03:49.191 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:49.191 Fetching value of define "__AVX512F__" : (undefined) 00:03:49.191 Fetching value of define "__AVX512VL__" : 
(undefined) 00:03:49.191 Fetching value of define "__PCLMUL__" : 1 00:03:49.191 Fetching value of define "__RDRND__" : 1 00:03:49.191 Fetching value of define "__RDSEED__" : 1 00:03:49.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:49.191 Fetching value of define "__znver1__" : (undefined) 00:03:49.191 Fetching value of define "__znver2__" : (undefined) 00:03:49.191 Fetching value of define "__znver3__" : (undefined) 00:03:49.191 Fetching value of define "__znver4__" : (undefined) 00:03:49.191 Library asan found: YES 00:03:49.191 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:49.191 Message: lib/log: Defining dependency "log" 00:03:49.191 Message: lib/kvargs: Defining dependency "kvargs" 00:03:49.191 Message: lib/telemetry: Defining dependency "telemetry" 00:03:49.191 Library rt found: YES 00:03:49.191 Checking for function "getentropy" : NO 00:03:49.191 Message: lib/eal: Defining dependency "eal" 00:03:49.191 Message: lib/ring: Defining dependency "ring" 00:03:49.191 Message: lib/rcu: Defining dependency "rcu" 00:03:49.191 Message: lib/mempool: Defining dependency "mempool" 00:03:49.191 Message: lib/mbuf: Defining dependency "mbuf" 00:03:49.191 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:49.191 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:49.191 Compiler for C supports arguments -mpclmul: YES 00:03:49.191 Compiler for C supports arguments -maes: YES 00:03:49.191 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:49.191 Compiler for C supports arguments -mavx512bw: YES 00:03:49.191 Compiler for C supports arguments -mavx512dq: YES 00:03:49.191 Compiler for C supports arguments -mavx512vl: YES 00:03:49.191 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:49.191 Compiler for C supports arguments -mavx2: YES 00:03:49.191 Compiler for C supports arguments -mavx: YES 00:03:49.191 Message: lib/net: Defining dependency "net" 00:03:49.191 Message: lib/meter: Defining 
dependency "meter" 00:03:49.191 Message: lib/ethdev: Defining dependency "ethdev" 00:03:49.191 Message: lib/pci: Defining dependency "pci" 00:03:49.191 Message: lib/cmdline: Defining dependency "cmdline" 00:03:49.191 Message: lib/hash: Defining dependency "hash" 00:03:49.191 Message: lib/timer: Defining dependency "timer" 00:03:49.191 Message: lib/compressdev: Defining dependency "compressdev" 00:03:49.191 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:49.191 Message: lib/dmadev: Defining dependency "dmadev" 00:03:49.191 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:49.191 Message: lib/power: Defining dependency "power" 00:03:49.191 Message: lib/reorder: Defining dependency "reorder" 00:03:49.191 Message: lib/security: Defining dependency "security" 00:03:49.191 Has header "linux/userfaultfd.h" : YES 00:03:49.191 Has header "linux/vduse.h" : YES 00:03:49.191 Message: lib/vhost: Defining dependency "vhost" 00:03:49.191 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:49.191 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:49.191 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:49.191 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:49.191 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:49.191 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:49.191 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:49.191 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:49.191 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:49.191 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:49.191 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:49.191 Configuring doxy-api-html.conf using configuration 00:03:49.191 Configuring doxy-api-man.conf using configuration 00:03:49.191 Program mandb found: YES 
(/usr/bin/mandb) 00:03:49.191 Program sphinx-build found: NO 00:03:49.191 Configuring rte_build_config.h using configuration 00:03:49.191 Message: 00:03:49.191 ================= 00:03:49.191 Applications Enabled 00:03:49.191 ================= 00:03:49.191 00:03:49.191 apps: 00:03:49.191 00:03:49.191 00:03:49.191 Message: 00:03:49.191 ================= 00:03:49.191 Libraries Enabled 00:03:49.191 ================= 00:03:49.191 00:03:49.191 libs: 00:03:49.191 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:49.191 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:49.191 cryptodev, dmadev, power, reorder, security, vhost, 00:03:49.191 00:03:49.191 Message: 00:03:49.191 =============== 00:03:49.191 Drivers Enabled 00:03:49.191 =============== 00:03:49.191 00:03:49.191 common: 00:03:49.191 00:03:49.191 bus: 00:03:49.191 pci, vdev, 00:03:49.191 mempool: 00:03:49.191 ring, 00:03:49.191 dma: 00:03:49.191 00:03:49.191 net: 00:03:49.191 00:03:49.191 crypto: 00:03:49.191 00:03:49.191 compress: 00:03:49.192 00:03:49.192 vdpa: 00:03:49.192 00:03:49.192 00:03:49.192 Message: 00:03:49.192 ================= 00:03:49.192 Content Skipped 00:03:49.192 ================= 00:03:49.192 00:03:49.192 apps: 00:03:49.192 dumpcap: explicitly disabled via build config 00:03:49.192 graph: explicitly disabled via build config 00:03:49.192 pdump: explicitly disabled via build config 00:03:49.192 proc-info: explicitly disabled via build config 00:03:49.192 test-acl: explicitly disabled via build config 00:03:49.192 test-bbdev: explicitly disabled via build config 00:03:49.192 test-cmdline: explicitly disabled via build config 00:03:49.192 test-compress-perf: explicitly disabled via build config 00:03:49.192 test-crypto-perf: explicitly disabled via build config 00:03:49.192 test-dma-perf: explicitly disabled via build config 00:03:49.192 test-eventdev: explicitly disabled via build config 00:03:49.192 test-fib: explicitly disabled via build config 00:03:49.192 
test-flow-perf: explicitly disabled via build config 00:03:49.192 test-gpudev: explicitly disabled via build config 00:03:49.192 test-mldev: explicitly disabled via build config 00:03:49.192 test-pipeline: explicitly disabled via build config 00:03:49.192 test-pmd: explicitly disabled via build config 00:03:49.192 test-regex: explicitly disabled via build config 00:03:49.192 test-sad: explicitly disabled via build config 00:03:49.192 test-security-perf: explicitly disabled via build config 00:03:49.192 00:03:49.192 libs: 00:03:49.192 argparse: explicitly disabled via build config 00:03:49.192 metrics: explicitly disabled via build config 00:03:49.192 acl: explicitly disabled via build config 00:03:49.192 bbdev: explicitly disabled via build config 00:03:49.192 bitratestats: explicitly disabled via build config 00:03:49.192 bpf: explicitly disabled via build config 00:03:49.192 cfgfile: explicitly disabled via build config 00:03:49.192 distributor: explicitly disabled via build config 00:03:49.192 efd: explicitly disabled via build config 00:03:49.192 eventdev: explicitly disabled via build config 00:03:49.192 dispatcher: explicitly disabled via build config 00:03:49.192 gpudev: explicitly disabled via build config 00:03:49.192 gro: explicitly disabled via build config 00:03:49.192 gso: explicitly disabled via build config 00:03:49.192 ip_frag: explicitly disabled via build config 00:03:49.192 jobstats: explicitly disabled via build config 00:03:49.192 latencystats: explicitly disabled via build config 00:03:49.192 lpm: explicitly disabled via build config 00:03:49.192 member: explicitly disabled via build config 00:03:49.192 pcapng: explicitly disabled via build config 00:03:49.192 rawdev: explicitly disabled via build config 00:03:49.192 regexdev: explicitly disabled via build config 00:03:49.192 mldev: explicitly disabled via build config 00:03:49.192 rib: explicitly disabled via build config 00:03:49.192 sched: explicitly disabled via build config 00:03:49.192 
stack: explicitly disabled via build config 00:03:49.192 ipsec: explicitly disabled via build config 00:03:49.192 pdcp: explicitly disabled via build config 00:03:49.192 fib: explicitly disabled via build config 00:03:49.192 port: explicitly disabled via build config 00:03:49.192 pdump: explicitly disabled via build config 00:03:49.192 table: explicitly disabled via build config 00:03:49.192 pipeline: explicitly disabled via build config 00:03:49.192 graph: explicitly disabled via build config 00:03:49.192 node: explicitly disabled via build config 00:03:49.192 00:03:49.192 drivers: 00:03:49.192 common/cpt: not in enabled drivers build config 00:03:49.192 common/dpaax: not in enabled drivers build config 00:03:49.192 common/iavf: not in enabled drivers build config 00:03:49.192 common/idpf: not in enabled drivers build config 00:03:49.192 common/ionic: not in enabled drivers build config 00:03:49.192 common/mvep: not in enabled drivers build config 00:03:49.192 common/octeontx: not in enabled drivers build config 00:03:49.192 bus/auxiliary: not in enabled drivers build config 00:03:49.192 bus/cdx: not in enabled drivers build config 00:03:49.192 bus/dpaa: not in enabled drivers build config 00:03:49.192 bus/fslmc: not in enabled drivers build config 00:03:49.192 bus/ifpga: not in enabled drivers build config 00:03:49.192 bus/platform: not in enabled drivers build config 00:03:49.192 bus/uacce: not in enabled drivers build config 00:03:49.192 bus/vmbus: not in enabled drivers build config 00:03:49.192 common/cnxk: not in enabled drivers build config 00:03:49.192 common/mlx5: not in enabled drivers build config 00:03:49.192 common/nfp: not in enabled drivers build config 00:03:49.192 common/nitrox: not in enabled drivers build config 00:03:49.192 common/qat: not in enabled drivers build config 00:03:49.192 common/sfc_efx: not in enabled drivers build config 00:03:49.192 mempool/bucket: not in enabled drivers build config 00:03:49.192 mempool/cnxk: not in enabled 
drivers build config 00:03:49.192 mempool/dpaa: not in enabled drivers build config 00:03:49.192 mempool/dpaa2: not in enabled drivers build config 00:03:49.192 mempool/octeontx: not in enabled drivers build config 00:03:49.192 mempool/stack: not in enabled drivers build config 00:03:49.192 dma/cnxk: not in enabled drivers build config 00:03:49.192 dma/dpaa: not in enabled drivers build config 00:03:49.192 dma/dpaa2: not in enabled drivers build config 00:03:49.192 dma/hisilicon: not in enabled drivers build config 00:03:49.192 dma/idxd: not in enabled drivers build config 00:03:49.192 dma/ioat: not in enabled drivers build config 00:03:49.192 dma/skeleton: not in enabled drivers build config 00:03:49.192 net/af_packet: not in enabled drivers build config 00:03:49.192 net/af_xdp: not in enabled drivers build config 00:03:49.192 net/ark: not in enabled drivers build config 00:03:49.192 net/atlantic: not in enabled drivers build config 00:03:49.192 net/avp: not in enabled drivers build config 00:03:49.192 net/axgbe: not in enabled drivers build config 00:03:49.192 net/bnx2x: not in enabled drivers build config 00:03:49.192 net/bnxt: not in enabled drivers build config 00:03:49.192 net/bonding: not in enabled drivers build config 00:03:49.192 net/cnxk: not in enabled drivers build config 00:03:49.192 net/cpfl: not in enabled drivers build config 00:03:49.192 net/cxgbe: not in enabled drivers build config 00:03:49.192 net/dpaa: not in enabled drivers build config 00:03:49.192 net/dpaa2: not in enabled drivers build config 00:03:49.192 net/e1000: not in enabled drivers build config 00:03:49.192 net/ena: not in enabled drivers build config 00:03:49.192 net/enetc: not in enabled drivers build config 00:03:49.192 net/enetfec: not in enabled drivers build config 00:03:49.192 net/enic: not in enabled drivers build config 00:03:49.192 net/failsafe: not in enabled drivers build config 00:03:49.192 net/fm10k: not in enabled drivers build config 00:03:49.192 net/gve: not in 
enabled drivers build config 00:03:49.192 net/hinic: not in enabled drivers build config 00:03:49.192 net/hns3: not in enabled drivers build config 00:03:49.192 net/i40e: not in enabled drivers build config 00:03:49.192 net/iavf: not in enabled drivers build config 00:03:49.192 net/ice: not in enabled drivers build config 00:03:49.192 net/idpf: not in enabled drivers build config 00:03:49.192 net/igc: not in enabled drivers build config 00:03:49.192 net/ionic: not in enabled drivers build config 00:03:49.192 net/ipn3ke: not in enabled drivers build config 00:03:49.192 net/ixgbe: not in enabled drivers build config 00:03:49.192 net/mana: not in enabled drivers build config 00:03:49.192 net/memif: not in enabled drivers build config 00:03:49.192 net/mlx4: not in enabled drivers build config 00:03:49.192 net/mlx5: not in enabled drivers build config 00:03:49.192 net/mvneta: not in enabled drivers build config 00:03:49.192 net/mvpp2: not in enabled drivers build config 00:03:49.192 net/netvsc: not in enabled drivers build config 00:03:49.192 net/nfb: not in enabled drivers build config 00:03:49.192 net/nfp: not in enabled drivers build config 00:03:49.192 net/ngbe: not in enabled drivers build config 00:03:49.192 net/null: not in enabled drivers build config 00:03:49.192 net/octeontx: not in enabled drivers build config 00:03:49.192 net/octeon_ep: not in enabled drivers build config 00:03:49.192 net/pcap: not in enabled drivers build config 00:03:49.192 net/pfe: not in enabled drivers build config 00:03:49.192 net/qede: not in enabled drivers build config 00:03:49.192 net/ring: not in enabled drivers build config 00:03:49.192 net/sfc: not in enabled drivers build config 00:03:49.192 net/softnic: not in enabled drivers build config 00:03:49.192 net/tap: not in enabled drivers build config 00:03:49.192 net/thunderx: not in enabled drivers build config 00:03:49.192 net/txgbe: not in enabled drivers build config 00:03:49.192 net/vdev_netvsc: not in enabled drivers build 
config 00:03:49.192 net/vhost: not in enabled drivers build config 00:03:49.192 net/virtio: not in enabled drivers build config 00:03:49.192 net/vmxnet3: not in enabled drivers build config 00:03:49.192 raw/*: missing internal dependency, "rawdev" 00:03:49.192 crypto/armv8: not in enabled drivers build config 00:03:49.192 crypto/bcmfs: not in enabled drivers build config 00:03:49.192 crypto/caam_jr: not in enabled drivers build config 00:03:49.192 crypto/ccp: not in enabled drivers build config 00:03:49.192 crypto/cnxk: not in enabled drivers build config 00:03:49.192 crypto/dpaa_sec: not in enabled drivers build config 00:03:49.192 crypto/dpaa2_sec: not in enabled drivers build config 00:03:49.192 crypto/ipsec_mb: not in enabled drivers build config 00:03:49.192 crypto/mlx5: not in enabled drivers build config 00:03:49.192 crypto/mvsam: not in enabled drivers build config 00:03:49.192 crypto/nitrox: not in enabled drivers build config 00:03:49.192 crypto/null: not in enabled drivers build config 00:03:49.192 crypto/octeontx: not in enabled drivers build config 00:03:49.192 crypto/openssl: not in enabled drivers build config 00:03:49.192 crypto/scheduler: not in enabled drivers build config 00:03:49.192 crypto/uadk: not in enabled drivers build config 00:03:49.192 crypto/virtio: not in enabled drivers build config 00:03:49.192 compress/isal: not in enabled drivers build config 00:03:49.192 compress/mlx5: not in enabled drivers build config 00:03:49.192 compress/nitrox: not in enabled drivers build config 00:03:49.192 compress/octeontx: not in enabled drivers build config 00:03:49.192 compress/zlib: not in enabled drivers build config 00:03:49.192 regex/*: missing internal dependency, "regexdev" 00:03:49.192 ml/*: missing internal dependency, "mldev" 00:03:49.192 vdpa/ifc: not in enabled drivers build config 00:03:49.192 vdpa/mlx5: not in enabled drivers build config 00:03:49.192 vdpa/nfp: not in enabled drivers build config 00:03:49.193 vdpa/sfc: not in enabled 
drivers build config 00:03:49.193 event/*: missing internal dependency, "eventdev" 00:03:49.193 baseband/*: missing internal dependency, "bbdev" 00:03:49.193 gpu/*: missing internal dependency, "gpudev" 00:03:49.193 00:03:49.193 00:03:49.450 Build targets in project: 85 00:03:49.450 00:03:49.450 DPDK 24.03.0 00:03:49.450 00:03:49.450 User defined options 00:03:49.450 buildtype : debug 00:03:49.450 default_library : shared 00:03:49.450 libdir : lib 00:03:49.450 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:49.450 b_sanitize : address 00:03:49.450 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:49.450 c_link_args : 00:03:49.450 cpu_instruction_set: native 00:03:49.450 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:49.450 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:49.450 enable_docs : false 00:03:49.450 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:49.450 enable_kmods : false 00:03:49.450 max_lcores : 128 00:03:49.450 tests : false 00:03:49.450 00:03:49.450 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:50.386 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:50.386 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:50.386 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:50.386 [3/268] Linking static target lib/librte_kvargs.a 00:03:50.386 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:50.386 [5/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:50.645 [6/268] Linking static target lib/librte_log.a 00:03:50.903 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:50.903 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:51.161 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:51.161 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.161 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:51.420 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:51.679 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:51.679 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:51.679 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.937 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:51.937 [17/268] Linking target lib/librte_log.so.24.1 00:03:51.937 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:52.195 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:52.195 [20/268] Linking static target lib/librte_telemetry.a 00:03:52.195 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:52.195 [22/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:52.452 [23/268] Linking target lib/librte_kvargs.so.24.1 00:03:52.724 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:52.724 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:52.724 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:52.724 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:53.290 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.290 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:53.290 [30/268] Linking target lib/librte_telemetry.so.24.1 00:03:53.290 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:53.548 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:53.548 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:53.548 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:53.548 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:53.806 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:53.806 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:54.370 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:54.370 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:54.370 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:54.627 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:54.627 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:54.627 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:54.884 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:55.142 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:55.142 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:55.142 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:55.708 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:03:55.708 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:55.966 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:55.966 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:55.966 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:55.966 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:56.532 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:56.532 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:56.532 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:56.532 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:57.097 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:57.097 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:57.097 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:57.097 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:57.355 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:57.355 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:57.614 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:57.614 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:57.873 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:57.873 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:58.131 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:58.131 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:58.398 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:58.671 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:58.671 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:58.671 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:58.671 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:58.671 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:58.930 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:58.930 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:58.930 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:59.189 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:59.447 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:59.447 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:59.705 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:59.705 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:59.963 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:59.963 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:59.963 [86/268] Linking static target lib/librte_eal.a 00:04:00.222 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:00.222 [88/268] Linking static target lib/librte_ring.a 00:04:00.480 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:00.739 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:00.739 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:00.739 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.998 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:00.998 [94/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:00.998 [95/268] Linking static target lib/librte_rcu.a 00:04:00.998 [96/268] Linking static target lib/librte_mempool.a 00:04:01.256 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:01.514 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:01.514 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:01.514 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:01.773 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.773 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:02.031 [103/268] Linking static target lib/librte_mbuf.a 00:04:02.031 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:02.289 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:02.550 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:02.550 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:02.550 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.810 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:02.810 [110/268] Linking static target lib/librte_meter.a 00:04:02.810 [111/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:02.810 [112/268] Linking static target lib/librte_net.a 00:04:03.377 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:03.377 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.377 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:03.377 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.634 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.634 
[118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:03.634 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:04.201 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:04.766 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:05.024 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:05.024 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:05.282 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:05.541 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:05.541 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:05.541 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:05.541 [128/268] Linking static target lib/librte_pci.a 00:04:05.799 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:05.799 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:06.058 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:06.058 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:06.058 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:06.058 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.316 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:06.316 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:06.316 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:06.316 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:06.575 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:06.575 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:06.575 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:06.575 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:06.575 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:06.575 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:06.575 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:06.834 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:07.769 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:07.769 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:07.769 [149/268] Linking static target lib/librte_cmdline.a 00:04:07.769 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:07.769 [151/268] Linking static target lib/librte_timer.a 00:04:08.028 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:08.286 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:08.286 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:08.852 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:08.852 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:08.852 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.852 [158/268] Linking static target lib/librte_ethdev.a 00:04:09.440 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:09.440 [160/268] Linking static target lib/librte_hash.a 00:04:09.440 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:09.440 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:09.440 [163/268] Linking static 
target lib/librte_compressdev.a 00:04:09.440 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:09.699 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:09.957 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.216 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:10.216 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:10.474 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:10.474 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:10.474 [171/268] Linking static target lib/librte_dmadev.a 00:04:10.733 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:10.733 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.992 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:10.992 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.559 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:11.559 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:11.817 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.817 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:12.075 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:12.075 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:12.075 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:12.075 [183/268] Linking static target lib/librte_cryptodev.a 00:04:12.333 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:12.591 
[185/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:12.591 [186/268] Linking static target lib/librte_power.a 00:04:13.157 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:13.157 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:13.157 [189/268] Linking static target lib/librte_reorder.a 00:04:13.157 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:13.723 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:13.723 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:13.723 [193/268] Linking static target lib/librte_security.a 00:04:13.981 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.241 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.807 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:14.807 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.065 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:15.323 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:15.323 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.581 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:15.581 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:16.148 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:16.407 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:16.666 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:16.666 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:16.924 [207/268] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:16.924 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:16.924 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:17.182 [210/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.442 [211/268] Linking target lib/librte_eal.so.24.1 00:04:17.442 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:17.442 [213/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:17.442 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:17.442 [215/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:17.442 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:17.442 [217/268] Linking static target drivers/librte_bus_vdev.a 00:04:17.700 [218/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:17.700 [219/268] Linking target lib/librte_ring.so.24.1 00:04:17.700 [220/268] Linking target lib/librte_pci.so.24.1 00:04:17.700 [221/268] Linking target lib/librte_timer.so.24.1 00:04:17.700 [222/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:17.700 [223/268] Linking target lib/librte_meter.so.24.1 00:04:17.960 [224/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:17.960 [225/268] Linking target lib/librte_dmadev.so.24.1 00:04:17.960 [226/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:17.960 [227/268] Linking target lib/librte_rcu.so.24.1 00:04:17.960 [228/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:17.960 [229/268] Linking target lib/librte_mempool.so.24.1 00:04:18.218 [230/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 
00:04:18.218 [231/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:18.218 [232/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:18.218 [233/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:18.218 [234/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:18.218 [235/268] Linking static target drivers/librte_bus_pci.a 00:04:18.218 [236/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.218 [237/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:18.218 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:18.218 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:18.477 [240/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:18.477 [241/268] Linking target lib/librte_mbuf.so.24.1 00:04:18.477 [242/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:18.477 [243/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:18.477 [244/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:18.735 [245/268] Linking static target drivers/librte_mempool_ring.a 00:04:18.735 [246/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:18.735 [247/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:18.735 [248/268] Linking target lib/librte_net.so.24.1 00:04:18.735 [249/268] Linking target lib/librte_reorder.so.24.1 00:04:18.735 [250/268] Linking target lib/librte_compressdev.so.24.1 00:04:18.735 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:04:18.994 [252/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:18.994 [253/268] Generating 
symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:18.994 [254/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:18.994 [255/268] Linking target lib/librte_cmdline.so.24.1 00:04:18.994 [256/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.994 [257/268] Linking target lib/librte_hash.so.24.1 00:04:18.994 [258/268] Linking target lib/librte_security.so.24.1 00:04:19.253 [259/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:19.253 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:20.196 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.455 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:20.714 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:20.714 [264/268] Linking target lib/librte_power.so.24.1 00:04:24.899 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:24.899 [266/268] Linking static target lib/librte_vhost.a 00:04:25.834 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.092 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:26.092 INFO: autodetecting backend as ninja 00:04:26.092 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:48.060 CC lib/log/log_flags.o 00:04:48.060 CC lib/log/log.o 00:04:48.060 CC lib/log/log_deprecated.o 00:04:48.060 CC lib/ut/ut.o 00:04:48.060 CC lib/ut_mock/mock.o 00:04:48.060 LIB libspdk_ut.a 00:04:48.060 LIB libspdk_log.a 00:04:48.060 LIB libspdk_ut_mock.a 00:04:48.060 SO libspdk_ut.so.2.0 00:04:48.060 SO libspdk_ut_mock.so.6.0 00:04:48.060 SO libspdk_log.so.7.1 00:04:48.060 SYMLINK libspdk_ut.so 00:04:48.060 SYMLINK libspdk_ut_mock.so 00:04:48.060 SYMLINK libspdk_log.so 00:04:48.060 
CC lib/ioat/ioat.o 00:04:48.060 CC lib/dma/dma.o 00:04:48.060 CC lib/util/base64.o 00:04:48.060 CC lib/util/bit_array.o 00:04:48.060 CC lib/util/cpuset.o 00:04:48.060 CC lib/util/crc16.o 00:04:48.060 CC lib/util/crc32.o 00:04:48.060 CC lib/util/crc32c.o 00:04:48.060 CXX lib/trace_parser/trace.o 00:04:48.060 CC lib/vfio_user/host/vfio_user_pci.o 00:04:48.060 CC lib/vfio_user/host/vfio_user.o 00:04:48.060 CC lib/util/crc32_ieee.o 00:04:48.060 CC lib/util/crc64.o 00:04:48.060 CC lib/util/dif.o 00:04:48.060 LIB libspdk_dma.a 00:04:48.060 SO libspdk_dma.so.5.0 00:04:48.060 CC lib/util/fd.o 00:04:48.060 SYMLINK libspdk_dma.so 00:04:48.060 CC lib/util/fd_group.o 00:04:48.060 CC lib/util/file.o 00:04:48.060 CC lib/util/hexlify.o 00:04:48.060 CC lib/util/iov.o 00:04:48.060 CC lib/util/math.o 00:04:48.060 LIB libspdk_vfio_user.a 00:04:48.060 LIB libspdk_ioat.a 00:04:48.060 SO libspdk_ioat.so.7.0 00:04:48.060 SO libspdk_vfio_user.so.5.0 00:04:48.060 CC lib/util/net.o 00:04:48.060 SYMLINK libspdk_ioat.so 00:04:48.060 CC lib/util/pipe.o 00:04:48.060 CC lib/util/strerror_tls.o 00:04:48.060 CC lib/util/string.o 00:04:48.060 SYMLINK libspdk_vfio_user.so 00:04:48.060 CC lib/util/uuid.o 00:04:48.060 CC lib/util/xor.o 00:04:48.060 CC lib/util/zipf.o 00:04:48.060 CC lib/util/md5.o 00:04:48.627 LIB libspdk_util.a 00:04:48.627 LIB libspdk_trace_parser.a 00:04:48.627 SO libspdk_trace_parser.so.6.0 00:04:48.627 SO libspdk_util.so.10.1 00:04:48.627 SYMLINK libspdk_trace_parser.so 00:04:48.627 SYMLINK libspdk_util.so 00:04:48.886 CC lib/env_dpdk/env.o 00:04:48.886 CC lib/vmd/vmd.o 00:04:48.886 CC lib/vmd/led.o 00:04:48.886 CC lib/env_dpdk/pci.o 00:04:48.886 CC lib/env_dpdk/memory.o 00:04:48.886 CC lib/env_dpdk/init.o 00:04:48.886 CC lib/idxd/idxd.o 00:04:48.886 CC lib/conf/conf.o 00:04:48.886 CC lib/json/json_parse.o 00:04:48.886 CC lib/rdma_utils/rdma_utils.o 00:04:49.145 CC lib/json/json_util.o 00:04:49.145 LIB libspdk_conf.a 00:04:49.145 CC lib/json/json_write.o 00:04:49.145 SO 
libspdk_conf.so.6.0 00:04:49.403 LIB libspdk_rdma_utils.a 00:04:49.403 SO libspdk_rdma_utils.so.1.0 00:04:49.403 SYMLINK libspdk_conf.so 00:04:49.403 CC lib/idxd/idxd_user.o 00:04:49.403 CC lib/idxd/idxd_kernel.o 00:04:49.403 SYMLINK libspdk_rdma_utils.so 00:04:49.403 CC lib/env_dpdk/threads.o 00:04:49.403 CC lib/env_dpdk/pci_ioat.o 00:04:49.403 CC lib/env_dpdk/pci_virtio.o 00:04:49.661 CC lib/env_dpdk/pci_vmd.o 00:04:49.661 CC lib/env_dpdk/pci_idxd.o 00:04:49.661 CC lib/env_dpdk/pci_event.o 00:04:49.661 LIB libspdk_json.a 00:04:49.661 CC lib/env_dpdk/sigbus_handler.o 00:04:49.661 CC lib/env_dpdk/pci_dpdk.o 00:04:49.661 SO libspdk_json.so.6.0 00:04:49.661 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:49.661 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:49.919 SYMLINK libspdk_json.so 00:04:49.919 LIB libspdk_vmd.a 00:04:49.919 SO libspdk_vmd.so.6.0 00:04:49.919 SYMLINK libspdk_vmd.so 00:04:49.919 LIB libspdk_idxd.a 00:04:49.919 CC lib/rdma_provider/common.o 00:04:49.919 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:49.919 SO libspdk_idxd.so.12.1 00:04:49.919 CC lib/jsonrpc/jsonrpc_server.o 00:04:49.919 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:49.919 CC lib/jsonrpc/jsonrpc_client.o 00:04:49.919 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:50.177 SYMLINK libspdk_idxd.so 00:04:50.177 LIB libspdk_rdma_provider.a 00:04:50.177 SO libspdk_rdma_provider.so.7.0 00:04:50.435 LIB libspdk_jsonrpc.a 00:04:50.435 SYMLINK libspdk_rdma_provider.so 00:04:50.435 SO libspdk_jsonrpc.so.6.0 00:04:50.435 SYMLINK libspdk_jsonrpc.so 00:04:50.694 CC lib/rpc/rpc.o 00:04:50.953 LIB libspdk_rpc.a 00:04:50.953 LIB libspdk_env_dpdk.a 00:04:50.953 SO libspdk_rpc.so.6.0 00:04:50.953 SYMLINK libspdk_rpc.so 00:04:51.212 SO libspdk_env_dpdk.so.15.1 00:04:51.212 SYMLINK libspdk_env_dpdk.so 00:04:51.212 CC lib/keyring/keyring.o 00:04:51.212 CC lib/keyring/keyring_rpc.o 00:04:51.212 CC lib/trace/trace.o 00:04:51.212 CC lib/trace/trace_flags.o 00:04:51.212 CC lib/trace/trace_rpc.o 00:04:51.212 CC lib/notify/notify.o 
00:04:51.212 CC lib/notify/notify_rpc.o 00:04:51.471 LIB libspdk_notify.a 00:04:51.471 SO libspdk_notify.so.6.0 00:04:51.471 SYMLINK libspdk_notify.so 00:04:51.730 LIB libspdk_trace.a 00:04:51.730 LIB libspdk_keyring.a 00:04:51.730 SO libspdk_trace.so.11.0 00:04:51.730 SO libspdk_keyring.so.2.0 00:04:51.730 SYMLINK libspdk_trace.so 00:04:51.730 SYMLINK libspdk_keyring.so 00:04:51.989 CC lib/sock/sock.o 00:04:51.989 CC lib/sock/sock_rpc.o 00:04:51.989 CC lib/thread/iobuf.o 00:04:51.989 CC lib/thread/thread.o 00:04:52.556 LIB libspdk_sock.a 00:04:52.815 SO libspdk_sock.so.10.0 00:04:52.815 SYMLINK libspdk_sock.so 00:04:53.073 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:53.073 CC lib/nvme/nvme_ctrlr.o 00:04:53.073 CC lib/nvme/nvme_ns_cmd.o 00:04:53.073 CC lib/nvme/nvme_fabric.o 00:04:53.073 CC lib/nvme/nvme_ns.o 00:04:53.073 CC lib/nvme/nvme_pcie.o 00:04:53.073 CC lib/nvme/nvme_qpair.o 00:04:53.073 CC lib/nvme/nvme.o 00:04:53.073 CC lib/nvme/nvme_pcie_common.o 00:04:54.006 CC lib/nvme/nvme_quirks.o 00:04:54.265 CC lib/nvme/nvme_transport.o 00:04:54.265 CC lib/nvme/nvme_discovery.o 00:04:54.523 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:54.523 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:54.523 CC lib/nvme/nvme_tcp.o 00:04:54.780 CC lib/nvme/nvme_opal.o 00:04:55.038 CC lib/nvme/nvme_io_msg.o 00:04:55.038 CC lib/nvme/nvme_poll_group.o 00:04:55.296 LIB libspdk_thread.a 00:04:55.296 SO libspdk_thread.so.11.0 00:04:55.554 CC lib/nvme/nvme_zns.o 00:04:55.554 SYMLINK libspdk_thread.so 00:04:55.554 CC lib/nvme/nvme_stubs.o 00:04:55.811 CC lib/accel/accel.o 00:04:55.811 CC lib/blob/blobstore.o 00:04:56.068 CC lib/init/json_config.o 00:04:56.068 CC lib/init/subsystem.o 00:04:56.068 CC lib/virtio/virtio.o 00:04:56.068 CC lib/virtio/virtio_vhost_user.o 00:04:56.326 CC lib/blob/request.o 00:04:56.326 CC lib/init/subsystem_rpc.o 00:04:56.583 CC lib/init/rpc.o 00:04:56.583 CC lib/accel/accel_rpc.o 00:04:56.583 CC lib/accel/accel_sw.o 00:04:56.839 CC lib/virtio/virtio_vfio_user.o 00:04:56.839 CC 
lib/fsdev/fsdev.o 00:04:56.839 LIB libspdk_init.a 00:04:56.839 CC lib/virtio/virtio_pci.o 00:04:56.839 CC lib/blob/zeroes.o 00:04:56.839 SO libspdk_init.so.6.0 00:04:56.839 CC lib/blob/blob_bs_dev.o 00:04:56.839 SYMLINK libspdk_init.so 00:04:56.839 CC lib/nvme/nvme_auth.o 00:04:57.096 CC lib/nvme/nvme_cuse.o 00:04:57.354 CC lib/nvme/nvme_rdma.o 00:04:57.354 CC lib/fsdev/fsdev_io.o 00:04:57.354 LIB libspdk_accel.a 00:04:57.354 LIB libspdk_virtio.a 00:04:57.354 SO libspdk_virtio.so.7.0 00:04:57.354 SO libspdk_accel.so.16.0 00:04:57.354 CC lib/event/app.o 00:04:57.612 SYMLINK libspdk_virtio.so 00:04:57.612 CC lib/event/reactor.o 00:04:57.612 CC lib/event/log_rpc.o 00:04:57.612 SYMLINK libspdk_accel.so 00:04:57.612 CC lib/event/app_rpc.o 00:04:57.612 CC lib/fsdev/fsdev_rpc.o 00:04:57.870 CC lib/event/scheduler_static.o 00:04:57.871 LIB libspdk_fsdev.a 00:04:57.871 SO libspdk_fsdev.so.2.0 00:04:58.171 SYMLINK libspdk_fsdev.so 00:04:58.172 CC lib/bdev/bdev.o 00:04:58.172 CC lib/bdev/bdev_rpc.o 00:04:58.172 CC lib/bdev/bdev_zone.o 00:04:58.172 CC lib/bdev/part.o 00:04:58.172 CC lib/bdev/scsi_nvme.o 00:04:58.172 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:58.429 LIB libspdk_event.a 00:04:58.688 SO libspdk_event.so.14.0 00:04:58.688 SYMLINK libspdk_event.so 00:04:59.254 LIB libspdk_nvme.a 00:04:59.255 LIB libspdk_fuse_dispatcher.a 00:04:59.513 SO libspdk_fuse_dispatcher.so.1.0 00:04:59.513 SO libspdk_nvme.so.15.0 00:04:59.513 SYMLINK libspdk_fuse_dispatcher.so 00:04:59.771 SYMLINK libspdk_nvme.so 00:05:01.144 LIB libspdk_blob.a 00:05:01.402 SO libspdk_blob.so.11.0 00:05:01.402 SYMLINK libspdk_blob.so 00:05:01.661 CC lib/blobfs/blobfs.o 00:05:01.661 CC lib/blobfs/tree.o 00:05:01.661 CC lib/lvol/lvol.o 00:05:02.228 LIB libspdk_bdev.a 00:05:02.228 SO libspdk_bdev.so.17.0 00:05:02.486 SYMLINK libspdk_bdev.so 00:05:02.744 CC lib/ftl/ftl_core.o 00:05:02.744 CC lib/ftl/ftl_init.o 00:05:02.744 CC lib/ftl/ftl_layout.o 00:05:02.744 CC lib/nvmf/ctrlr.o 00:05:02.744 CC 
lib/ftl/ftl_debug.o 00:05:02.744 CC lib/nbd/nbd.o 00:05:02.744 CC lib/scsi/dev.o 00:05:02.744 CC lib/ublk/ublk.o 00:05:03.002 CC lib/ublk/ublk_rpc.o 00:05:03.002 CC lib/ftl/ftl_io.o 00:05:03.002 LIB libspdk_lvol.a 00:05:03.002 SO libspdk_lvol.so.10.0 00:05:03.260 CC lib/ftl/ftl_sb.o 00:05:03.260 CC lib/scsi/lun.o 00:05:03.260 LIB libspdk_blobfs.a 00:05:03.260 SO libspdk_blobfs.so.10.0 00:05:03.260 SYMLINK libspdk_lvol.so 00:05:03.260 CC lib/ftl/ftl_l2p.o 00:05:03.260 SYMLINK libspdk_blobfs.so 00:05:03.260 CC lib/ftl/ftl_l2p_flat.o 00:05:03.260 CC lib/ftl/ftl_nv_cache.o 00:05:03.260 CC lib/scsi/port.o 00:05:03.518 CC lib/scsi/scsi.o 00:05:03.518 CC lib/scsi/scsi_bdev.o 00:05:03.518 CC lib/scsi/scsi_pr.o 00:05:03.518 CC lib/nbd/nbd_rpc.o 00:05:03.518 CC lib/nvmf/ctrlr_discovery.o 00:05:03.518 CC lib/nvmf/ctrlr_bdev.o 00:05:03.777 CC lib/nvmf/subsystem.o 00:05:03.777 LIB libspdk_ublk.a 00:05:03.777 CC lib/scsi/scsi_rpc.o 00:05:03.777 SO libspdk_ublk.so.3.0 00:05:03.777 LIB libspdk_nbd.a 00:05:03.777 SYMLINK libspdk_ublk.so 00:05:03.777 CC lib/scsi/task.o 00:05:03.777 SO libspdk_nbd.so.7.0 00:05:04.035 SYMLINK libspdk_nbd.so 00:05:04.035 CC lib/ftl/ftl_band.o 00:05:04.035 CC lib/ftl/ftl_band_ops.o 00:05:04.035 CC lib/ftl/ftl_writer.o 00:05:04.035 CC lib/ftl/ftl_rq.o 00:05:04.295 LIB libspdk_scsi.a 00:05:04.295 SO libspdk_scsi.so.9.0 00:05:04.295 CC lib/nvmf/nvmf.o 00:05:04.295 CC lib/nvmf/nvmf_rpc.o 00:05:04.295 SYMLINK libspdk_scsi.so 00:05:04.295 CC lib/nvmf/transport.o 00:05:04.573 CC lib/nvmf/tcp.o 00:05:04.573 CC lib/ftl/ftl_reloc.o 00:05:04.831 CC lib/nvmf/stubs.o 00:05:04.831 CC lib/nvmf/mdns_server.o 00:05:05.089 CC lib/nvmf/rdma.o 00:05:05.347 CC lib/ftl/ftl_l2p_cache.o 00:05:05.605 CC lib/nvmf/auth.o 00:05:05.605 CC lib/ftl/ftl_p2l.o 00:05:05.605 CC lib/ftl/ftl_p2l_log.o 00:05:05.605 CC lib/ftl/mngt/ftl_mngt.o 00:05:05.605 CC lib/iscsi/conn.o 00:05:05.605 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:06.171 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:06.171 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:05:06.171 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:06.429 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:06.429 CC lib/vhost/vhost.o 00:05:06.429 CC lib/iscsi/init_grp.o 00:05:06.429 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:06.429 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:06.687 CC lib/iscsi/iscsi.o 00:05:06.946 CC lib/vhost/vhost_rpc.o 00:05:06.946 CC lib/vhost/vhost_scsi.o 00:05:06.946 CC lib/iscsi/param.o 00:05:06.946 CC lib/iscsi/portal_grp.o 00:05:06.946 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:06.946 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:07.204 CC lib/vhost/vhost_blk.o 00:05:07.463 CC lib/vhost/rte_vhost_user.o 00:05:07.463 CC lib/iscsi/tgt_node.o 00:05:07.463 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:07.721 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:07.978 CC lib/iscsi/iscsi_subsystem.o 00:05:08.235 CC lib/iscsi/iscsi_rpc.o 00:05:08.235 CC lib/iscsi/task.o 00:05:08.493 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:08.493 CC lib/ftl/utils/ftl_conf.o 00:05:08.493 CC lib/ftl/utils/ftl_md.o 00:05:08.493 CC lib/ftl/utils/ftl_mempool.o 00:05:08.750 CC lib/ftl/utils/ftl_bitmap.o 00:05:08.750 LIB libspdk_vhost.a 00:05:08.750 CC lib/ftl/utils/ftl_property.o 00:05:08.750 SO libspdk_vhost.so.8.0 00:05:08.750 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:08.750 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:09.008 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:09.008 SYMLINK libspdk_vhost.so 00:05:09.008 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:09.008 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:09.008 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:09.267 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:09.267 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:09.267 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:09.525 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:09.525 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:09.525 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:09.525 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:09.525 CC lib/ftl/base/ftl_base_dev.o 00:05:09.525 CC lib/ftl/base/ftl_base_bdev.o 00:05:09.783 CC 
lib/ftl/ftl_trace.o 00:05:09.783 LIB libspdk_iscsi.a 00:05:09.783 LIB libspdk_nvmf.a 00:05:10.041 SO libspdk_iscsi.so.8.0 00:05:10.041 LIB libspdk_ftl.a 00:05:10.041 SO libspdk_nvmf.so.20.0 00:05:10.299 SYMLINK libspdk_iscsi.so 00:05:10.557 SO libspdk_ftl.so.9.0 00:05:10.557 SYMLINK libspdk_nvmf.so 00:05:10.815 SYMLINK libspdk_ftl.so 00:05:11.381 CC module/env_dpdk/env_dpdk_rpc.o 00:05:11.381 CC module/accel/dsa/accel_dsa.o 00:05:11.381 CC module/accel/iaa/accel_iaa.o 00:05:11.381 CC module/sock/posix/posix.o 00:05:11.381 CC module/accel/ioat/accel_ioat.o 00:05:11.381 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:11.381 CC module/accel/error/accel_error.o 00:05:11.381 CC module/fsdev/aio/fsdev_aio.o 00:05:11.381 CC module/blob/bdev/blob_bdev.o 00:05:11.381 CC module/keyring/file/keyring.o 00:05:11.381 LIB libspdk_env_dpdk_rpc.a 00:05:11.638 SO libspdk_env_dpdk_rpc.so.6.0 00:05:11.638 SYMLINK libspdk_env_dpdk_rpc.so 00:05:11.638 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:11.638 CC module/keyring/file/keyring_rpc.o 00:05:11.638 LIB libspdk_scheduler_dynamic.a 00:05:11.638 CC module/accel/error/accel_error_rpc.o 00:05:11.895 SO libspdk_scheduler_dynamic.so.4.0 00:05:11.895 CC module/accel/ioat/accel_ioat_rpc.o 00:05:11.895 CC module/accel/iaa/accel_iaa_rpc.o 00:05:11.895 SYMLINK libspdk_scheduler_dynamic.so 00:05:11.895 LIB libspdk_keyring_file.a 00:05:11.895 LIB libspdk_accel_iaa.a 00:05:11.895 SO libspdk_keyring_file.so.2.0 00:05:11.895 LIB libspdk_blob_bdev.a 00:05:11.895 CC module/accel/dsa/accel_dsa_rpc.o 00:05:11.895 LIB libspdk_accel_error.a 00:05:12.153 SO libspdk_accel_iaa.so.3.0 00:05:12.153 SO libspdk_blob_bdev.so.11.0 00:05:12.153 LIB libspdk_accel_ioat.a 00:05:12.153 SO libspdk_accel_error.so.2.0 00:05:12.153 CC module/keyring/linux/keyring.o 00:05:12.153 SYMLINK libspdk_keyring_file.so 00:05:12.153 SO libspdk_accel_ioat.so.6.0 00:05:12.153 CC module/keyring/linux/keyring_rpc.o 00:05:12.153 CC module/scheduler/dpdk_governor/dpdk_governor.o 
00:05:12.153 SYMLINK libspdk_blob_bdev.so 00:05:12.153 SYMLINK libspdk_accel_iaa.so 00:05:12.153 SYMLINK libspdk_accel_error.so 00:05:12.153 SYMLINK libspdk_accel_ioat.so 00:05:12.153 CC module/fsdev/aio/linux_aio_mgr.o 00:05:12.153 LIB libspdk_accel_dsa.a 00:05:12.411 SO libspdk_accel_dsa.so.5.0 00:05:12.411 LIB libspdk_keyring_linux.a 00:05:12.411 LIB libspdk_scheduler_dpdk_governor.a 00:05:12.411 SO libspdk_keyring_linux.so.1.0 00:05:12.411 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:12.411 SYMLINK libspdk_accel_dsa.so 00:05:12.411 CC module/scheduler/gscheduler/gscheduler.o 00:05:12.411 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:12.411 SYMLINK libspdk_keyring_linux.so 00:05:12.412 CC module/bdev/error/vbdev_error.o 00:05:12.412 CC module/bdev/delay/vbdev_delay.o 00:05:12.412 CC module/blobfs/bdev/blobfs_bdev.o 00:05:12.670 LIB libspdk_scheduler_gscheduler.a 00:05:12.670 CC module/bdev/gpt/gpt.o 00:05:12.670 CC module/bdev/lvol/vbdev_lvol.o 00:05:12.670 LIB libspdk_fsdev_aio.a 00:05:12.670 SO libspdk_scheduler_gscheduler.so.4.0 00:05:12.670 CC module/bdev/malloc/bdev_malloc.o 00:05:12.670 CC module/bdev/null/bdev_null.o 00:05:12.670 SO libspdk_fsdev_aio.so.1.0 00:05:12.670 SYMLINK libspdk_scheduler_gscheduler.so 00:05:12.928 CC module/bdev/gpt/vbdev_gpt.o 00:05:12.928 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:12.928 SYMLINK libspdk_fsdev_aio.so 00:05:12.928 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:12.928 CC module/bdev/error/vbdev_error_rpc.o 00:05:12.928 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:12.928 LIB libspdk_sock_posix.a 00:05:12.928 SO libspdk_sock_posix.so.6.0 00:05:12.928 LIB libspdk_blobfs_bdev.a 00:05:12.928 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:13.186 SO libspdk_blobfs_bdev.so.6.0 00:05:13.186 CC module/bdev/null/bdev_null_rpc.o 00:05:13.186 SYMLINK libspdk_sock_posix.so 00:05:13.186 LIB libspdk_bdev_error.a 00:05:13.186 SYMLINK libspdk_blobfs_bdev.so 00:05:13.186 SO libspdk_bdev_error.so.6.0 00:05:13.186 SYMLINK 
libspdk_bdev_error.so 00:05:13.186 LIB libspdk_bdev_delay.a 00:05:13.186 LIB libspdk_bdev_malloc.a 00:05:13.186 LIB libspdk_bdev_gpt.a 00:05:13.186 SO libspdk_bdev_delay.so.6.0 00:05:13.186 LIB libspdk_bdev_null.a 00:05:13.186 CC module/bdev/nvme/bdev_nvme.o 00:05:13.443 SO libspdk_bdev_gpt.so.6.0 00:05:13.443 SO libspdk_bdev_malloc.so.6.0 00:05:13.443 CC module/bdev/passthru/vbdev_passthru.o 00:05:13.443 SO libspdk_bdev_null.so.6.0 00:05:13.443 SYMLINK libspdk_bdev_delay.so 00:05:13.443 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:13.443 CC module/bdev/raid/bdev_raid.o 00:05:13.443 SYMLINK libspdk_bdev_null.so 00:05:13.443 SYMLINK libspdk_bdev_gpt.so 00:05:13.443 SYMLINK libspdk_bdev_malloc.so 00:05:13.444 CC module/bdev/raid/bdev_raid_rpc.o 00:05:13.444 CC module/bdev/split/vbdev_split.o 00:05:13.444 CC module/bdev/raid/bdev_raid_sb.o 00:05:13.444 LIB libspdk_bdev_lvol.a 00:05:13.701 CC module/bdev/raid/raid0.o 00:05:13.701 SO libspdk_bdev_lvol.so.6.0 00:05:13.701 CC module/bdev/aio/bdev_aio.o 00:05:13.701 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:13.701 CC module/bdev/split/vbdev_split_rpc.o 00:05:13.701 SYMLINK libspdk_bdev_lvol.so 00:05:13.701 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:13.701 CC module/bdev/raid/raid1.o 00:05:13.701 LIB libspdk_bdev_passthru.a 00:05:13.701 CC module/bdev/nvme/nvme_rpc.o 00:05:13.959 SO libspdk_bdev_passthru.so.6.0 00:05:13.959 SYMLINK libspdk_bdev_passthru.so 00:05:13.959 CC module/bdev/nvme/bdev_mdns_client.o 00:05:13.959 LIB libspdk_bdev_split.a 00:05:13.959 SO libspdk_bdev_split.so.6.0 00:05:13.959 SYMLINK libspdk_bdev_split.so 00:05:13.959 CC module/bdev/nvme/vbdev_opal.o 00:05:14.218 CC module/bdev/ftl/bdev_ftl.o 00:05:14.218 CC module/bdev/aio/bdev_aio_rpc.o 00:05:14.218 CC module/bdev/raid/concat.o 00:05:14.475 CC module/bdev/raid/raid5f.o 00:05:14.475 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:14.475 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:14.475 CC module/bdev/iscsi/bdev_iscsi.o 
00:05:14.475 LIB libspdk_bdev_aio.a 00:05:14.475 SO libspdk_bdev_aio.so.6.0 00:05:14.733 SYMLINK libspdk_bdev_aio.so 00:05:14.733 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:14.733 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:14.733 LIB libspdk_bdev_zone_block.a 00:05:14.733 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:14.733 SO libspdk_bdev_zone_block.so.6.0 00:05:14.733 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:14.992 SYMLINK libspdk_bdev_zone_block.so 00:05:14.992 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:14.992 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:15.250 LIB libspdk_bdev_ftl.a 00:05:15.250 SO libspdk_bdev_ftl.so.6.0 00:05:15.250 LIB libspdk_bdev_iscsi.a 00:05:15.250 SO libspdk_bdev_iscsi.so.6.0 00:05:15.250 SYMLINK libspdk_bdev_ftl.so 00:05:15.250 SYMLINK libspdk_bdev_iscsi.so 00:05:15.509 LIB libspdk_bdev_virtio.a 00:05:15.509 LIB libspdk_bdev_raid.a 00:05:15.509 SO libspdk_bdev_virtio.so.6.0 00:05:15.509 SO libspdk_bdev_raid.so.6.0 00:05:15.509 SYMLINK libspdk_bdev_virtio.so 00:05:15.767 SYMLINK libspdk_bdev_raid.so 00:05:18.302 LIB libspdk_bdev_nvme.a 00:05:18.302 SO libspdk_bdev_nvme.so.7.1 00:05:18.302 SYMLINK libspdk_bdev_nvme.so 00:05:18.561 CC module/event/subsystems/keyring/keyring.o 00:05:18.561 CC module/event/subsystems/fsdev/fsdev.o 00:05:18.561 CC module/event/subsystems/scheduler/scheduler.o 00:05:18.561 CC module/event/subsystems/vmd/vmd.o 00:05:18.561 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:18.561 CC module/event/subsystems/sock/sock.o 00:05:18.561 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:18.561 CC module/event/subsystems/iobuf/iobuf.o 00:05:18.561 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:18.820 LIB libspdk_event_fsdev.a 00:05:18.820 LIB libspdk_event_keyring.a 00:05:18.820 LIB libspdk_event_scheduler.a 00:05:18.820 SO libspdk_event_fsdev.so.1.0 00:05:18.820 SO libspdk_event_keyring.so.1.0 00:05:18.820 LIB libspdk_event_iobuf.a 00:05:18.820 LIB libspdk_event_vmd.a 00:05:18.820 SO 
libspdk_event_scheduler.so.4.0 00:05:18.820 SO libspdk_event_iobuf.so.3.0 00:05:18.820 LIB libspdk_event_sock.a 00:05:18.820 SYMLINK libspdk_event_keyring.so 00:05:18.820 SO libspdk_event_vmd.so.6.0 00:05:18.820 SO libspdk_event_sock.so.5.0 00:05:18.820 SYMLINK libspdk_event_fsdev.so 00:05:18.820 SYMLINK libspdk_event_scheduler.so 00:05:18.820 LIB libspdk_event_vhost_blk.a 00:05:18.820 SYMLINK libspdk_event_iobuf.so 00:05:18.820 SO libspdk_event_vhost_blk.so.3.0 00:05:18.820 SYMLINK libspdk_event_vmd.so 00:05:18.820 SYMLINK libspdk_event_sock.so 00:05:18.820 SYMLINK libspdk_event_vhost_blk.so 00:05:19.080 CC module/event/subsystems/accel/accel.o 00:05:19.339 LIB libspdk_event_accel.a 00:05:19.339 SO libspdk_event_accel.so.6.0 00:05:19.339 SYMLINK libspdk_event_accel.so 00:05:19.598 CC module/event/subsystems/bdev/bdev.o 00:05:19.857 LIB libspdk_event_bdev.a 00:05:19.857 SO libspdk_event_bdev.so.6.0 00:05:19.857 SYMLINK libspdk_event_bdev.so 00:05:20.115 CC module/event/subsystems/nbd/nbd.o 00:05:20.115 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:20.115 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:20.115 CC module/event/subsystems/ublk/ublk.o 00:05:20.115 CC module/event/subsystems/scsi/scsi.o 00:05:20.115 LIB libspdk_event_nbd.a 00:05:20.373 LIB libspdk_event_ublk.a 00:05:20.373 SO libspdk_event_nbd.so.6.0 00:05:20.373 SO libspdk_event_ublk.so.3.0 00:05:20.373 LIB libspdk_event_scsi.a 00:05:20.373 SYMLINK libspdk_event_ublk.so 00:05:20.373 SYMLINK libspdk_event_nbd.so 00:05:20.373 SO libspdk_event_scsi.so.6.0 00:05:20.373 LIB libspdk_event_nvmf.a 00:05:20.373 SO libspdk_event_nvmf.so.6.0 00:05:20.373 SYMLINK libspdk_event_scsi.so 00:05:20.373 SYMLINK libspdk_event_nvmf.so 00:05:20.632 CC module/event/subsystems/iscsi/iscsi.o 00:05:20.632 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:20.890 LIB libspdk_event_vhost_scsi.a 00:05:20.890 SO libspdk_event_vhost_scsi.so.3.0 00:05:20.891 LIB libspdk_event_iscsi.a 00:05:20.891 SO 
libspdk_event_iscsi.so.6.0 00:05:20.891 SYMLINK libspdk_event_vhost_scsi.so 00:05:20.891 SYMLINK libspdk_event_iscsi.so 00:05:21.149 SO libspdk.so.6.0 00:05:21.149 SYMLINK libspdk.so 00:05:21.409 CC app/spdk_lspci/spdk_lspci.o 00:05:21.409 CC app/trace_record/trace_record.o 00:05:21.409 CXX app/trace/trace.o 00:05:21.409 CC app/spdk_nvme_perf/perf.o 00:05:21.409 CC app/spdk_nvme_identify/identify.o 00:05:21.409 CC app/iscsi_tgt/iscsi_tgt.o 00:05:21.409 CC app/nvmf_tgt/nvmf_main.o 00:05:21.409 CC test/thread/poller_perf/poller_perf.o 00:05:21.409 CC examples/util/zipf/zipf.o 00:05:21.409 CC app/spdk_tgt/spdk_tgt.o 00:05:21.693 LINK spdk_lspci 00:05:21.694 LINK spdk_trace_record 00:05:21.694 LINK zipf 00:05:21.694 LINK poller_perf 00:05:21.694 LINK nvmf_tgt 00:05:21.951 LINK iscsi_tgt 00:05:21.951 CC app/spdk_nvme_discover/discovery_aer.o 00:05:21.951 LINK spdk_tgt 00:05:21.951 CC app/spdk_top/spdk_top.o 00:05:22.209 LINK spdk_trace 00:05:22.209 LINK spdk_nvme_discover 00:05:22.209 CC examples/ioat/perf/perf.o 00:05:22.209 CC app/spdk_dd/spdk_dd.o 00:05:22.468 CC test/dma/test_dma/test_dma.o 00:05:22.468 CC app/fio/nvme/fio_plugin.o 00:05:22.468 CC test/app/bdev_svc/bdev_svc.o 00:05:22.468 LINK spdk_nvme_perf 00:05:22.726 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:22.726 CC app/fio/bdev/fio_plugin.o 00:05:22.726 LINK ioat_perf 00:05:22.726 LINK bdev_svc 00:05:22.984 LINK spdk_dd 00:05:22.984 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:23.242 CC examples/ioat/verify/verify.o 00:05:23.242 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:23.243 TEST_HEADER include/spdk/accel.h 00:05:23.243 TEST_HEADER include/spdk/accel_module.h 00:05:23.243 TEST_HEADER include/spdk/assert.h 00:05:23.243 LINK spdk_nvme_identify 00:05:23.243 TEST_HEADER include/spdk/barrier.h 00:05:23.243 TEST_HEADER include/spdk/base64.h 00:05:23.243 TEST_HEADER include/spdk/bdev.h 00:05:23.243 TEST_HEADER include/spdk/bdev_module.h 00:05:23.243 TEST_HEADER include/spdk/bdev_zone.h 00:05:23.243 
TEST_HEADER include/spdk/bit_array.h 00:05:23.243 TEST_HEADER include/spdk/bit_pool.h 00:05:23.243 TEST_HEADER include/spdk/blob_bdev.h 00:05:23.243 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:23.243 TEST_HEADER include/spdk/blobfs.h 00:05:23.243 TEST_HEADER include/spdk/blob.h 00:05:23.243 TEST_HEADER include/spdk/conf.h 00:05:23.243 TEST_HEADER include/spdk/config.h 00:05:23.243 TEST_HEADER include/spdk/cpuset.h 00:05:23.243 TEST_HEADER include/spdk/crc16.h 00:05:23.243 TEST_HEADER include/spdk/crc32.h 00:05:23.243 TEST_HEADER include/spdk/crc64.h 00:05:23.243 LINK test_dma 00:05:23.243 TEST_HEADER include/spdk/dif.h 00:05:23.501 TEST_HEADER include/spdk/dma.h 00:05:23.501 TEST_HEADER include/spdk/endian.h 00:05:23.501 TEST_HEADER include/spdk/env_dpdk.h 00:05:23.501 TEST_HEADER include/spdk/env.h 00:05:23.501 TEST_HEADER include/spdk/event.h 00:05:23.501 TEST_HEADER include/spdk/fd_group.h 00:05:23.501 TEST_HEADER include/spdk/fd.h 00:05:23.501 TEST_HEADER include/spdk/file.h 00:05:23.501 TEST_HEADER include/spdk/fsdev.h 00:05:23.501 TEST_HEADER include/spdk/fsdev_module.h 00:05:23.501 TEST_HEADER include/spdk/ftl.h 00:05:23.501 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:23.501 TEST_HEADER include/spdk/gpt_spec.h 00:05:23.501 TEST_HEADER include/spdk/hexlify.h 00:05:23.501 TEST_HEADER include/spdk/histogram_data.h 00:05:23.501 TEST_HEADER include/spdk/idxd.h 00:05:23.501 TEST_HEADER include/spdk/idxd_spec.h 00:05:23.501 TEST_HEADER include/spdk/init.h 00:05:23.501 TEST_HEADER include/spdk/ioat.h 00:05:23.501 TEST_HEADER include/spdk/ioat_spec.h 00:05:23.501 TEST_HEADER include/spdk/iscsi_spec.h 00:05:23.501 TEST_HEADER include/spdk/json.h 00:05:23.501 TEST_HEADER include/spdk/jsonrpc.h 00:05:23.501 TEST_HEADER include/spdk/keyring.h 00:05:23.501 TEST_HEADER include/spdk/keyring_module.h 00:05:23.501 TEST_HEADER include/spdk/likely.h 00:05:23.501 TEST_HEADER include/spdk/log.h 00:05:23.501 TEST_HEADER include/spdk/lvol.h 00:05:23.501 TEST_HEADER 
include/spdk/md5.h 00:05:23.501 LINK nvme_fuzz 00:05:23.501 TEST_HEADER include/spdk/memory.h 00:05:23.501 TEST_HEADER include/spdk/mmio.h 00:05:23.501 TEST_HEADER include/spdk/nbd.h 00:05:23.501 TEST_HEADER include/spdk/net.h 00:05:23.501 TEST_HEADER include/spdk/notify.h 00:05:23.501 TEST_HEADER include/spdk/nvme.h 00:05:23.501 TEST_HEADER include/spdk/nvme_intel.h 00:05:23.501 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:23.501 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:23.501 TEST_HEADER include/spdk/nvme_spec.h 00:05:23.501 TEST_HEADER include/spdk/nvme_zns.h 00:05:23.501 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:23.501 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:23.501 TEST_HEADER include/spdk/nvmf.h 00:05:23.501 TEST_HEADER include/spdk/nvmf_spec.h 00:05:23.501 TEST_HEADER include/spdk/nvmf_transport.h 00:05:23.501 TEST_HEADER include/spdk/opal.h 00:05:23.501 TEST_HEADER include/spdk/opal_spec.h 00:05:23.501 TEST_HEADER include/spdk/pci_ids.h 00:05:23.501 TEST_HEADER include/spdk/pipe.h 00:05:23.501 TEST_HEADER include/spdk/queue.h 00:05:23.501 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:23.501 TEST_HEADER include/spdk/reduce.h 00:05:23.501 TEST_HEADER include/spdk/rpc.h 00:05:23.501 TEST_HEADER include/spdk/scheduler.h 00:05:23.501 TEST_HEADER include/spdk/scsi.h 00:05:23.501 TEST_HEADER include/spdk/scsi_spec.h 00:05:23.501 TEST_HEADER include/spdk/sock.h 00:05:23.501 TEST_HEADER include/spdk/stdinc.h 00:05:23.501 TEST_HEADER include/spdk/string.h 00:05:23.501 TEST_HEADER include/spdk/thread.h 00:05:23.501 TEST_HEADER include/spdk/trace.h 00:05:23.501 TEST_HEADER include/spdk/trace_parser.h 00:05:23.501 TEST_HEADER include/spdk/tree.h 00:05:23.501 TEST_HEADER include/spdk/ublk.h 00:05:23.501 TEST_HEADER include/spdk/util.h 00:05:23.501 TEST_HEADER include/spdk/uuid.h 00:05:23.501 TEST_HEADER include/spdk/version.h 00:05:23.501 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:23.501 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:23.501 TEST_HEADER 
include/spdk/vhost.h 00:05:23.501 TEST_HEADER include/spdk/vmd.h 00:05:23.501 TEST_HEADER include/spdk/xor.h 00:05:23.501 TEST_HEADER include/spdk/zipf.h 00:05:23.501 CXX test/cpp_headers/accel.o 00:05:23.501 LINK spdk_nvme 00:05:23.501 CXX test/cpp_headers/accel_module.o 00:05:23.759 LINK verify 00:05:23.759 LINK spdk_top 00:05:23.759 CXX test/cpp_headers/assert.o 00:05:23.759 CXX test/cpp_headers/barrier.o 00:05:23.759 CXX test/cpp_headers/base64.o 00:05:23.759 LINK spdk_bdev 00:05:24.017 CC test/app/histogram_perf/histogram_perf.o 00:05:24.017 CC test/app/jsoncat/jsoncat.o 00:05:24.017 CXX test/cpp_headers/bdev.o 00:05:24.276 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:24.276 CC examples/vmd/lsvmd/lsvmd.o 00:05:24.276 CC examples/idxd/perf/perf.o 00:05:24.276 CC app/vhost/vhost.o 00:05:24.276 LINK histogram_perf 00:05:24.276 LINK vhost_fuzz 00:05:24.276 LINK jsoncat 00:05:24.276 CC examples/thread/thread/thread_ex.o 00:05:24.534 CXX test/cpp_headers/bdev_module.o 00:05:24.534 LINK lsvmd 00:05:24.534 LINK interrupt_tgt 00:05:24.534 CXX test/cpp_headers/bdev_zone.o 00:05:24.534 CXX test/cpp_headers/bit_array.o 00:05:24.534 LINK vhost 00:05:24.792 CXX test/cpp_headers/bit_pool.o 00:05:24.792 LINK idxd_perf 00:05:24.792 CC examples/sock/hello_world/hello_sock.o 00:05:24.792 CXX test/cpp_headers/blob_bdev.o 00:05:24.792 CC examples/vmd/led/led.o 00:05:25.050 LINK thread 00:05:25.050 CXX test/cpp_headers/blobfs_bdev.o 00:05:25.050 CC test/app/stub/stub.o 00:05:25.309 LINK led 00:05:25.309 CC test/env/vtophys/vtophys.o 00:05:25.309 CC test/env/mem_callbacks/mem_callbacks.o 00:05:25.309 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:25.309 CXX test/cpp_headers/blobfs.o 00:05:25.309 CXX test/cpp_headers/blob.o 00:05:25.309 LINK hello_sock 00:05:25.309 LINK stub 00:05:25.309 CC test/env/memory/memory_ut.o 00:05:25.566 CXX test/cpp_headers/conf.o 00:05:25.566 LINK vtophys 00:05:25.566 LINK env_dpdk_post_init 00:05:25.566 CXX test/cpp_headers/config.o 
00:05:25.566 CXX test/cpp_headers/cpuset.o 00:05:25.825 CXX test/cpp_headers/crc16.o 00:05:25.825 CC examples/accel/perf/accel_perf.o 00:05:25.825 CC examples/blob/hello_world/hello_blob.o 00:05:25.825 CC test/event/event_perf/event_perf.o 00:05:26.083 CC test/event/reactor/reactor.o 00:05:26.083 CC test/event/reactor_perf/reactor_perf.o 00:05:26.083 CXX test/cpp_headers/crc32.o 00:05:26.083 CC test/env/pci/pci_ut.o 00:05:26.341 LINK event_perf 00:05:26.341 LINK mem_callbacks 00:05:26.341 LINK reactor 00:05:26.341 LINK reactor_perf 00:05:26.341 LINK hello_blob 00:05:26.341 CXX test/cpp_headers/crc64.o 00:05:26.658 LINK iscsi_fuzz 00:05:26.658 CC test/rpc_client/rpc_client_test.o 00:05:26.658 CXX test/cpp_headers/dif.o 00:05:26.658 CC test/event/app_repeat/app_repeat.o 00:05:26.916 CC test/nvme/aer/aer.o 00:05:26.916 CC examples/blob/cli/blobcli.o 00:05:26.916 CC test/accel/dif/dif.o 00:05:26.916 CXX test/cpp_headers/dma.o 00:05:26.916 LINK accel_perf 00:05:26.916 LINK app_repeat 00:05:26.916 LINK rpc_client_test 00:05:27.173 LINK pci_ut 00:05:27.173 CC test/nvme/reset/reset.o 00:05:27.173 CXX test/cpp_headers/endian.o 00:05:27.431 LINK aer 00:05:27.431 CC test/nvme/sgl/sgl.o 00:05:27.431 CXX test/cpp_headers/env_dpdk.o 00:05:27.690 CC test/event/scheduler/scheduler.o 00:05:27.690 CC test/blobfs/mkfs/mkfs.o 00:05:27.690 LINK reset 00:05:27.690 LINK blobcli 00:05:27.690 CC examples/nvme/hello_world/hello_world.o 00:05:27.949 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:27.949 CXX test/cpp_headers/env.o 00:05:27.949 LINK memory_ut 00:05:27.949 LINK mkfs 00:05:27.949 LINK scheduler 00:05:27.949 LINK sgl 00:05:27.949 CC examples/nvme/reconnect/reconnect.o 00:05:28.268 CXX test/cpp_headers/event.o 00:05:28.268 LINK hello_world 00:05:28.268 CXX test/cpp_headers/fd_group.o 00:05:28.268 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:28.268 CXX test/cpp_headers/fd.o 00:05:28.268 CXX test/cpp_headers/file.o 00:05:28.527 LINK hello_fsdev 00:05:28.527 CC 
examples/nvme/arbitration/arbitration.o 00:05:28.527 CC test/nvme/e2edp/nvme_dp.o 00:05:28.527 CC test/nvme/overhead/overhead.o 00:05:28.527 LINK dif 00:05:28.527 CXX test/cpp_headers/fsdev.o 00:05:28.785 CXX test/cpp_headers/fsdev_module.o 00:05:28.785 CC examples/bdev/bdevperf/bdevperf.o 00:05:28.785 CC examples/bdev/hello_world/hello_bdev.o 00:05:28.785 LINK reconnect 00:05:29.043 LINK nvme_dp 00:05:29.043 CXX test/cpp_headers/ftl.o 00:05:29.043 LINK arbitration 00:05:29.043 LINK overhead 00:05:29.043 CC examples/nvme/hotplug/hotplug.o 00:05:29.302 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:29.302 LINK nvme_manage 00:05:29.302 CC test/nvme/err_injection/err_injection.o 00:05:29.302 LINK hello_bdev 00:05:29.302 CC test/lvol/esnap/esnap.o 00:05:29.302 CXX test/cpp_headers/fuse_dispatcher.o 00:05:29.302 CC examples/nvme/abort/abort.o 00:05:29.302 LINK hotplug 00:05:29.561 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:29.561 CXX test/cpp_headers/gpt_spec.o 00:05:29.561 LINK cmb_copy 00:05:29.561 CXX test/cpp_headers/hexlify.o 00:05:29.819 LINK err_injection 00:05:29.819 LINK pmr_persistence 00:05:29.819 CC test/nvme/startup/startup.o 00:05:29.819 CC test/bdev/bdevio/bdevio.o 00:05:29.819 CC test/nvme/reserve/reserve.o 00:05:30.078 CXX test/cpp_headers/histogram_data.o 00:05:30.078 CC test/nvme/simple_copy/simple_copy.o 00:05:30.078 CC test/nvme/connect_stress/connect_stress.o 00:05:30.078 LINK abort 00:05:30.078 CXX test/cpp_headers/idxd.o 00:05:30.078 LINK startup 00:05:30.336 LINK reserve 00:05:30.336 LINK bdevperf 00:05:30.336 CXX test/cpp_headers/idxd_spec.o 00:05:30.336 CXX test/cpp_headers/init.o 00:05:30.336 LINK connect_stress 00:05:30.336 CXX test/cpp_headers/ioat.o 00:05:30.594 LINK simple_copy 00:05:30.594 CC test/nvme/boot_partition/boot_partition.o 00:05:30.852 LINK bdevio 00:05:30.852 CXX test/cpp_headers/ioat_spec.o 00:05:30.852 LINK boot_partition 00:05:30.852 CC test/nvme/compliance/nvme_compliance.o 00:05:30.852 CC 
test/nvme/fused_ordering/fused_ordering.o 00:05:30.852 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:30.852 CC test/nvme/fdp/fdp.o 00:05:30.852 CC test/nvme/cuse/cuse.o 00:05:30.852 CXX test/cpp_headers/iscsi_spec.o 00:05:31.111 CXX test/cpp_headers/json.o 00:05:31.111 CXX test/cpp_headers/jsonrpc.o 00:05:31.111 CC examples/nvmf/nvmf/nvmf.o 00:05:31.111 CXX test/cpp_headers/keyring.o 00:05:31.368 LINK doorbell_aers 00:05:31.369 LINK fused_ordering 00:05:31.369 CXX test/cpp_headers/keyring_module.o 00:05:31.369 CXX test/cpp_headers/likely.o 00:05:31.626 CXX test/cpp_headers/log.o 00:05:31.627 LINK nvme_compliance 00:05:31.627 CXX test/cpp_headers/lvol.o 00:05:31.627 CXX test/cpp_headers/md5.o 00:05:31.627 LINK fdp 00:05:31.627 CXX test/cpp_headers/memory.o 00:05:31.627 CXX test/cpp_headers/mmio.o 00:05:31.627 LINK nvmf 00:05:31.885 CXX test/cpp_headers/nbd.o 00:05:31.885 CXX test/cpp_headers/net.o 00:05:31.885 CXX test/cpp_headers/notify.o 00:05:31.885 CXX test/cpp_headers/nvme.o 00:05:31.885 CXX test/cpp_headers/nvme_intel.o 00:05:31.885 CXX test/cpp_headers/nvme_ocssd.o 00:05:31.885 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:32.144 CXX test/cpp_headers/nvme_spec.o 00:05:32.144 CXX test/cpp_headers/nvme_zns.o 00:05:32.144 CXX test/cpp_headers/nvmf_cmd.o 00:05:32.144 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:32.144 CXX test/cpp_headers/nvmf.o 00:05:32.144 CXX test/cpp_headers/nvmf_spec.o 00:05:32.403 CXX test/cpp_headers/nvmf_transport.o 00:05:32.403 CXX test/cpp_headers/opal.o 00:05:32.403 CXX test/cpp_headers/opal_spec.o 00:05:32.403 CXX test/cpp_headers/pci_ids.o 00:05:32.403 CXX test/cpp_headers/pipe.o 00:05:32.403 CXX test/cpp_headers/queue.o 00:05:32.403 CXX test/cpp_headers/reduce.o 00:05:32.661 CXX test/cpp_headers/rpc.o 00:05:32.661 CXX test/cpp_headers/scheduler.o 00:05:32.661 CXX test/cpp_headers/scsi.o 00:05:32.661 CXX test/cpp_headers/scsi_spec.o 00:05:32.661 CXX test/cpp_headers/sock.o 00:05:32.661 CXX test/cpp_headers/stdinc.o 00:05:32.661 CXX 
test/cpp_headers/string.o 00:05:32.920 CXX test/cpp_headers/thread.o 00:05:32.920 CXX test/cpp_headers/trace.o 00:05:32.920 CXX test/cpp_headers/trace_parser.o 00:05:32.920 CXX test/cpp_headers/tree.o 00:05:32.920 CXX test/cpp_headers/ublk.o 00:05:32.920 LINK cuse 00:05:32.920 CXX test/cpp_headers/util.o 00:05:33.178 CXX test/cpp_headers/uuid.o 00:05:33.178 CXX test/cpp_headers/version.o 00:05:33.178 CXX test/cpp_headers/vfio_user_pci.o 00:05:33.178 CXX test/cpp_headers/vfio_user_spec.o 00:05:33.178 CXX test/cpp_headers/vhost.o 00:05:33.178 CXX test/cpp_headers/vmd.o 00:05:33.178 CXX test/cpp_headers/xor.o 00:05:33.178 CXX test/cpp_headers/zipf.o 00:05:39.738 LINK esnap 00:05:39.738 00:05:39.738 real 2m9.000s 00:05:39.738 user 13m2.967s 00:05:39.738 sys 2m10.490s 00:05:39.738 ************************************ 00:05:39.738 END TEST make 00:05:39.738 ************************************ 00:05:39.738 10:34:10 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:39.738 10:34:10 make -- common/autotest_common.sh@10 -- $ set +x 00:05:39.738 10:34:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:39.738 10:34:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:39.738 10:34:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:39.738 10:34:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.738 10:34:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:39.738 10:34:10 -- pm/common@44 -- $ pid=5297 00:05:39.738 10:34:10 -- pm/common@50 -- $ kill -TERM 5297 00:05:39.738 10:34:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.738 10:34:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:39.738 10:34:10 -- pm/common@44 -- $ pid=5298 00:05:39.738 10:34:10 -- pm/common@50 -- $ kill -TERM 5298 00:05:39.738 10:34:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:39.738 10:34:10 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:39.738 10:34:10 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:39.738 10:34:10 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:39.738 10:34:10 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:39.738 10:34:10 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:39.738 10:34:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.738 10:34:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.738 10:34:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.738 10:34:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.738 10:34:10 -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.738 10:34:10 -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.738 10:34:10 -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.738 10:34:10 -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.738 10:34:10 -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.738 10:34:10 -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.738 10:34:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.738 10:34:10 -- scripts/common.sh@344 -- # case "$op" in 00:05:39.738 10:34:10 -- scripts/common.sh@345 -- # : 1 00:05:39.738 10:34:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.738 10:34:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.738 10:34:10 -- scripts/common.sh@365 -- # decimal 1 00:05:39.738 10:34:10 -- scripts/common.sh@353 -- # local d=1 00:05:39.738 10:34:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.738 10:34:10 -- scripts/common.sh@355 -- # echo 1 00:05:39.738 10:34:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.738 10:34:10 -- scripts/common.sh@366 -- # decimal 2 00:05:39.738 10:34:10 -- scripts/common.sh@353 -- # local d=2 00:05:39.738 10:34:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.738 10:34:10 -- scripts/common.sh@355 -- # echo 2 00:05:39.738 10:34:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.738 10:34:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.738 10:34:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.738 10:34:10 -- scripts/common.sh@368 -- # return 0 00:05:39.738 10:34:10 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.738 10:34:10 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:39.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.738 --rc genhtml_branch_coverage=1 00:05:39.738 --rc genhtml_function_coverage=1 00:05:39.738 --rc genhtml_legend=1 00:05:39.739 --rc geninfo_all_blocks=1 00:05:39.739 --rc geninfo_unexecuted_blocks=1 00:05:39.739 00:05:39.739 ' 00:05:39.739 10:34:10 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:39.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.739 --rc genhtml_branch_coverage=1 00:05:39.739 --rc genhtml_function_coverage=1 00:05:39.739 --rc genhtml_legend=1 00:05:39.739 --rc geninfo_all_blocks=1 00:05:39.739 --rc geninfo_unexecuted_blocks=1 00:05:39.739 00:05:39.739 ' 00:05:39.739 10:34:10 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:39.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.739 --rc genhtml_branch_coverage=1 00:05:39.739 --rc 
genhtml_function_coverage=1 00:05:39.739 --rc genhtml_legend=1 00:05:39.739 --rc geninfo_all_blocks=1 00:05:39.739 --rc geninfo_unexecuted_blocks=1 00:05:39.739 00:05:39.739 ' 00:05:39.739 10:34:10 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:39.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.739 --rc genhtml_branch_coverage=1 00:05:39.739 --rc genhtml_function_coverage=1 00:05:39.739 --rc genhtml_legend=1 00:05:39.739 --rc geninfo_all_blocks=1 00:05:39.739 --rc geninfo_unexecuted_blocks=1 00:05:39.739 00:05:39.739 ' 00:05:39.739 10:34:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:39.739 10:34:10 -- nvmf/common.sh@7 -- # uname -s 00:05:39.739 10:34:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.739 10:34:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.739 10:34:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.739 10:34:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.739 10:34:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.739 10:34:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.739 10:34:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.739 10:34:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.739 10:34:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.739 10:34:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.739 10:34:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:762637b5-3988-4abf-ad8d-a7e0f3892a8d 00:05:39.739 10:34:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=762637b5-3988-4abf-ad8d-a7e0f3892a8d 00:05:39.739 10:34:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.739 10:34:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.739 10:34:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.739 10:34:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:39.739 10:34:10 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:39.739 10:34:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.739 10:34:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.739 10:34:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.739 10:34:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.739 10:34:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.739 10:34:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.739 10:34:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.739 10:34:10 -- paths/export.sh@5 -- # export PATH 00:05:39.739 10:34:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.739 10:34:10 -- nvmf/common.sh@51 -- # : 0 00:05:39.739 10:34:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.739 10:34:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.739 10:34:10 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:39.739 10:34:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.739 10:34:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.739 10:34:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.739 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.739 10:34:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.739 10:34:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.739 10:34:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.739 10:34:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:39.739 10:34:10 -- spdk/autotest.sh@32 -- # uname -s 00:05:39.739 10:34:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:39.739 10:34:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:39.739 10:34:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:39.739 10:34:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:39.739 10:34:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:39.739 10:34:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:39.739 10:34:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:39.739 10:34:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:39.739 10:34:10 -- spdk/autotest.sh@48 -- # udevadm_pid=54659 00:05:39.739 10:34:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:39.739 10:34:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:39.739 10:34:10 -- pm/common@17 -- # local monitor 00:05:39.739 10:34:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.739 10:34:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.739 10:34:10 -- pm/common@21 -- # date +%s 00:05:39.739 10:34:10 -- pm/common@25 -- # sleep 1 00:05:39.739 10:34:10 -- 
pm/common@21 -- # date +%s 00:05:39.739 10:34:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731666850 00:05:39.739 10:34:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731666850 00:05:39.998 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731666850_collect-cpu-load.pm.log 00:05:39.998 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731666850_collect-vmstat.pm.log 00:05:40.940 10:34:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:40.940 10:34:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:40.940 10:34:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.940 10:34:11 -- common/autotest_common.sh@10 -- # set +x 00:05:40.940 10:34:11 -- spdk/autotest.sh@59 -- # create_test_list 00:05:40.940 10:34:11 -- common/autotest_common.sh@750 -- # xtrace_disable 00:05:40.940 10:34:11 -- common/autotest_common.sh@10 -- # set +x 00:05:40.940 10:34:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:40.940 10:34:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:40.940 10:34:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:40.940 10:34:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:40.940 10:34:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:40.940 10:34:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:40.940 10:34:11 -- common/autotest_common.sh@1455 -- # uname 00:05:40.940 10:34:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:40.940 10:34:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:40.940 10:34:11 -- common/autotest_common.sh@1475 -- 
# uname 00:05:40.940 10:34:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:40.940 10:34:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:40.940 10:34:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:40.940 lcov: LCOV version 1.15 00:05:40.940 10:34:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:02.860 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:02.860 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:20.941 10:34:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:20.941 10:34:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.941 10:34:49 -- common/autotest_common.sh@10 -- # set +x 00:06:20.941 10:34:49 -- spdk/autotest.sh@78 -- # rm -f 00:06:20.941 10:34:49 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:20.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.941 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:20.941 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:20.941 10:34:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:20.941 10:34:49 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:20.941 10:34:49 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:20.941 10:34:49 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:20.941 
10:34:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:20.941 10:34:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:20.941 10:34:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:20.941 10:34:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:20.941 10:34:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:20.941 10:34:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:20.941 10:34:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:20.941 10:34:49 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:20.941 10:34:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:20.941 10:34:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:20.941 10:34:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:20.941 10:34:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:20.941 10:34:49 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:20.941 10:34:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:20.941 10:34:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:20.941 10:34:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:20.941 10:34:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:20.941 10:34:49 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:20.941 10:34:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:20.941 10:34:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:20.941 10:34:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:20.941 10:34:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:20.941 10:34:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:20.941 10:34:49 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:06:20.941 10:34:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:20.941 10:34:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:20.941 No valid GPT data, bailing 00:06:20.941 10:34:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:20.941 10:34:49 -- scripts/common.sh@394 -- # pt= 00:06:20.941 10:34:49 -- scripts/common.sh@395 -- # return 1 00:06:20.941 10:34:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:20.941 1+0 records in 00:06:20.941 1+0 records out 00:06:20.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00388031 s, 270 MB/s 00:06:20.941 10:34:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:20.941 10:34:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:20.941 10:34:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:20.941 10:34:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:20.941 10:34:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:20.941 No valid GPT data, bailing 00:06:20.941 10:34:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:20.941 10:34:50 -- scripts/common.sh@394 -- # pt= 00:06:20.941 10:34:50 -- scripts/common.sh@395 -- # return 1 00:06:20.941 10:34:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:20.941 1+0 records in 00:06:20.941 1+0 records out 00:06:20.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452942 s, 232 MB/s 00:06:20.941 10:34:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:20.941 10:34:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:20.941 10:34:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:20.941 10:34:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:20.941 10:34:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:06:20.941 No valid GPT data, bailing 00:06:20.941 10:34:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:20.941 10:34:50 -- scripts/common.sh@394 -- # pt= 00:06:20.941 10:34:50 -- scripts/common.sh@395 -- # return 1 00:06:20.941 10:34:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:20.941 1+0 records in 00:06:20.941 1+0 records out 00:06:20.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00373959 s, 280 MB/s 00:06:20.941 10:34:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:20.941 10:34:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:20.941 10:34:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:20.941 10:34:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:20.941 10:34:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:20.941 No valid GPT data, bailing 00:06:20.941 10:34:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:20.941 10:34:50 -- scripts/common.sh@394 -- # pt= 00:06:20.941 10:34:50 -- scripts/common.sh@395 -- # return 1 00:06:20.941 10:34:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:20.941 1+0 records in 00:06:20.941 1+0 records out 00:06:20.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452366 s, 232 MB/s 00:06:20.941 10:34:50 -- spdk/autotest.sh@105 -- # sync 00:06:20.941 10:34:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:20.941 10:34:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:20.941 10:34:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:21.875 10:34:52 -- spdk/autotest.sh@111 -- # uname -s 00:06:21.875 10:34:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:21.875 10:34:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:21.875 10:34:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:06:22.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.133 Hugepages 00:06:22.133 node hugesize free / total 00:06:22.133 node0 1048576kB 0 / 0 00:06:22.133 node0 2048kB 0 / 0 00:06:22.133 00:06:22.133 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:22.391 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:22.391 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:22.391 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:22.391 10:34:52 -- spdk/autotest.sh@117 -- # uname -s 00:06:22.391 10:34:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:22.391 10:34:52 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:22.391 10:34:52 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:22.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:23.217 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:23.217 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:23.217 10:34:53 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:24.591 10:34:54 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:24.591 10:34:54 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:24.591 10:34:54 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:24.591 10:34:54 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:24.591 10:34:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:24.591 10:34:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:24.591 10:34:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:24.591 10:34:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:24.591 10:34:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:24.591 10:34:54 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:24.591 10:34:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:24.591 10:34:54 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:24.591 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.591 Waiting for block devices as requested 00:06:24.591 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.850 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.850 10:34:55 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:24.850 10:34:55 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:24.850 10:34:55 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:24.850 10:34:55 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:24.850 10:34:55 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:24.850 10:34:55 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:24.850 10:34:55 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:24.850 10:34:55 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:24.850 10:34:55 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:24.850 10:34:55 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:24.850 10:34:55 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:24.850 10:34:55 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:24.850 10:34:55 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:24.850 10:34:55 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:24.850 10:34:55 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:24.850 10:34:55 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:06:24.850 10:34:55 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:24.850 10:34:55 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:24.850 10:34:55 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:24.850 10:34:55 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:24.850 10:34:55 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:24.850 10:34:55 -- common/autotest_common.sh@1541 -- # continue 00:06:24.850 10:34:55 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:24.850 10:34:55 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:24.850 10:34:55 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:24.850 10:34:55 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:24.850 10:34:55 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:24.850 10:34:55 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:24.850 10:34:55 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:24.850 10:34:55 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:24.850 10:34:55 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:24.850 10:34:55 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:24.850 10:34:55 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:24.850 10:34:55 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:24.850 10:34:55 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:24.850 10:34:55 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:24.850 10:34:55 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:24.850 10:34:55 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:24.850 10:34:55 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:06:24.850 10:34:55 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:24.850 10:34:55 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:24.850 10:34:55 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:24.850 10:34:55 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:24.850 10:34:55 -- common/autotest_common.sh@1541 -- # continue 00:06:24.850 10:34:55 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:24.850 10:34:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.850 10:34:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.108 10:34:55 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:25.108 10:34:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.108 10:34:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.108 10:34:55 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:25.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:25.673 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:25.674 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:25.674 10:34:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:25.674 10:34:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.674 10:34:56 -- common/autotest_common.sh@10 -- # set +x 00:06:25.931 10:34:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:25.931 10:34:56 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:25.931 10:34:56 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:25.931 10:34:56 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:25.931 10:34:56 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:25.931 10:34:56 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:25.931 10:34:56 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:25.931 10:34:56 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:25.931 
10:34:56 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:25.931 10:34:56 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:25.932 10:34:56 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:25.932 10:34:56 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:25.932 10:34:56 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:25.932 10:34:56 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:25.932 10:34:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:25.932 10:34:56 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:25.932 10:34:56 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:25.932 10:34:56 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:25.932 10:34:56 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:25.932 10:34:56 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:25.932 10:34:56 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:25.932 10:34:56 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:25.932 10:34:56 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:25.932 10:34:56 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:25.932 10:34:56 -- common/autotest_common.sh@1570 -- # return 0 00:06:25.932 10:34:56 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:25.932 10:34:56 -- common/autotest_common.sh@1578 -- # return 0 00:06:25.932 10:34:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:25.932 10:34:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:25.932 10:34:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:25.932 10:34:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:25.932 10:34:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:25.932 10:34:56 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.932 10:34:56 -- common/autotest_common.sh@10 -- # set +x 00:06:25.932 10:34:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:25.932 10:34:56 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:25.932 10:34:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.932 10:34:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.932 10:34:56 -- common/autotest_common.sh@10 -- # set +x 00:06:25.932 ************************************ 00:06:25.932 START TEST env 00:06:25.932 ************************************ 00:06:25.932 10:34:56 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:25.932 * Looking for test storage... 00:06:25.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:25.932 10:34:56 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:25.932 10:34:56 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:25.932 10:34:56 env -- common/autotest_common.sh@1691 -- # lcov --version 00:06:25.932 10:34:56 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:25.932 10:34:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.932 10:34:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.932 10:34:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.932 10:34:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.932 10:34:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.932 10:34:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.932 10:34:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.190 10:34:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.190 10:34:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.190 10:34:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.190 10:34:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.190 10:34:56 env -- 
scripts/common.sh@344 -- # case "$op" in 00:06:26.190 10:34:56 env -- scripts/common.sh@345 -- # : 1 00:06:26.190 10:34:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.190 10:34:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.190 10:34:56 env -- scripts/common.sh@365 -- # decimal 1 00:06:26.190 10:34:56 env -- scripts/common.sh@353 -- # local d=1 00:06:26.190 10:34:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.190 10:34:56 env -- scripts/common.sh@355 -- # echo 1 00:06:26.190 10:34:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.190 10:34:56 env -- scripts/common.sh@366 -- # decimal 2 00:06:26.190 10:34:56 env -- scripts/common.sh@353 -- # local d=2 00:06:26.190 10:34:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.190 10:34:56 env -- scripts/common.sh@355 -- # echo 2 00:06:26.191 10:34:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.191 10:34:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.191 10:34:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.191 10:34:56 env -- scripts/common.sh@368 -- # return 0 00:06:26.191 10:34:56 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.191 10:34:56 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:26.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.191 --rc genhtml_branch_coverage=1 00:06:26.191 --rc genhtml_function_coverage=1 00:06:26.191 --rc genhtml_legend=1 00:06:26.191 --rc geninfo_all_blocks=1 00:06:26.191 --rc geninfo_unexecuted_blocks=1 00:06:26.191 00:06:26.191 ' 00:06:26.191 10:34:56 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:26.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.191 --rc genhtml_branch_coverage=1 00:06:26.191 --rc genhtml_function_coverage=1 00:06:26.191 --rc genhtml_legend=1 00:06:26.191 --rc 
geninfo_all_blocks=1 00:06:26.191 --rc geninfo_unexecuted_blocks=1 00:06:26.191 00:06:26.191 ' 00:06:26.191 10:34:56 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:26.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.191 --rc genhtml_branch_coverage=1 00:06:26.191 --rc genhtml_function_coverage=1 00:06:26.191 --rc genhtml_legend=1 00:06:26.191 --rc geninfo_all_blocks=1 00:06:26.191 --rc geninfo_unexecuted_blocks=1 00:06:26.191 00:06:26.191 ' 00:06:26.191 10:34:56 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:26.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.191 --rc genhtml_branch_coverage=1 00:06:26.191 --rc genhtml_function_coverage=1 00:06:26.191 --rc genhtml_legend=1 00:06:26.191 --rc geninfo_all_blocks=1 00:06:26.191 --rc geninfo_unexecuted_blocks=1 00:06:26.191 00:06:26.191 ' 00:06:26.191 10:34:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:26.191 10:34:56 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.191 10:34:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.191 10:34:56 env -- common/autotest_common.sh@10 -- # set +x 00:06:26.191 ************************************ 00:06:26.191 START TEST env_memory 00:06:26.191 ************************************ 00:06:26.191 10:34:56 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:26.191 00:06:26.191 00:06:26.191 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.191 http://cunit.sourceforge.net/ 00:06:26.191 00:06:26.191 00:06:26.191 Suite: memory 00:06:26.191 Test: alloc and free memory map ...[2024-11-15 10:34:56.581000] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:26.191 passed 00:06:26.191 Test: mem map translation ...[2024-11-15 10:34:56.642915] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:26.191 [2024-11-15 10:34:56.643021] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:26.191 [2024-11-15 10:34:56.643132] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:26.191 [2024-11-15 10:34:56.643188] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:26.191 passed 00:06:26.191 Test: mem map registration ...[2024-11-15 10:34:56.736932] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:26.191 [2024-11-15 10:34:56.737042] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:26.450 passed 00:06:26.450 Test: mem map adjacent registrations ...passed 00:06:26.450 00:06:26.450 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.450 suites 1 1 n/a 0 0 00:06:26.450 tests 4 4 4 0 0 00:06:26.450 asserts 152 152 152 0 n/a 00:06:26.450 00:06:26.450 Elapsed time = 0.317 seconds 00:06:26.450 00:06:26.450 real 0m0.351s 00:06:26.450 user 0m0.319s 00:06:26.450 sys 0m0.025s 00:06:26.450 10:34:56 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.450 10:34:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:26.450 ************************************ 00:06:26.450 END TEST env_memory 00:06:26.450 ************************************ 00:06:26.450 10:34:56 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:26.450 
10:34:56 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.450 10:34:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.450 10:34:56 env -- common/autotest_common.sh@10 -- # set +x 00:06:26.450 ************************************ 00:06:26.450 START TEST env_vtophys 00:06:26.450 ************************************ 00:06:26.450 10:34:56 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:26.450 EAL: lib.eal log level changed from notice to debug 00:06:26.450 EAL: Detected lcore 0 as core 0 on socket 0 00:06:26.450 EAL: Detected lcore 1 as core 0 on socket 0 00:06:26.450 EAL: Detected lcore 2 as core 0 on socket 0 00:06:26.450 EAL: Detected lcore 3 as core 0 on socket 0 00:06:26.450 EAL: Detected lcore 4 as core 0 on socket 0 00:06:26.450 EAL: Detected lcore 5 as core 0 on socket 0 00:06:26.450 EAL: Detected lcore 6 as core 0 on socket 0 00:06:26.450 EAL: Detected lcore 7 as core 0 on socket 0 00:06:26.450 EAL: Detected lcore 8 as core 0 on socket 0 00:06:26.450 EAL: Detected lcore 9 as core 0 on socket 0 00:06:26.450 EAL: Maximum logical cores by configuration: 128 00:06:26.450 EAL: Detected CPU lcores: 10 00:06:26.450 EAL: Detected NUMA nodes: 1 00:06:26.450 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:26.450 EAL: Detected shared linkage of DPDK 00:06:26.450 EAL: No shared files mode enabled, IPC will be disabled 00:06:26.450 EAL: Selected IOVA mode 'PA' 00:06:26.450 EAL: Probing VFIO support... 00:06:26.450 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:26.450 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:26.450 EAL: Ask a virtual area of 0x2e000 bytes 00:06:26.450 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:26.450 EAL: Setting up physically contiguous memory... 
00:06:26.450 EAL: Setting maximum number of open files to 524288 00:06:26.450 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:26.450 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:26.450 EAL: Ask a virtual area of 0x61000 bytes 00:06:26.450 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:26.450 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:26.450 EAL: Ask a virtual area of 0x400000000 bytes 00:06:26.450 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:26.450 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:26.450 EAL: Ask a virtual area of 0x61000 bytes 00:06:26.450 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:26.450 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:26.450 EAL: Ask a virtual area of 0x400000000 bytes 00:06:26.450 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:26.450 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:26.450 EAL: Ask a virtual area of 0x61000 bytes 00:06:26.450 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:26.450 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:26.450 EAL: Ask a virtual area of 0x400000000 bytes 00:06:26.450 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:26.450 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:26.450 EAL: Ask a virtual area of 0x61000 bytes 00:06:26.450 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:26.450 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:26.450 EAL: Ask a virtual area of 0x400000000 bytes 00:06:26.450 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:26.450 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:26.450 EAL: Hugepages will be freed exactly as allocated. 
00:06:26.450 EAL: No shared files mode enabled, IPC is disabled 00:06:26.450 EAL: No shared files mode enabled, IPC is disabled 00:06:26.709 EAL: TSC frequency is ~2200000 KHz 00:06:26.709 EAL: Main lcore 0 is ready (tid=7fd0796a1a40;cpuset=[0]) 00:06:26.709 EAL: Trying to obtain current memory policy. 00:06:26.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.709 EAL: Restoring previous memory policy: 0 00:06:26.709 EAL: request: mp_malloc_sync 00:06:26.709 EAL: No shared files mode enabled, IPC is disabled 00:06:26.709 EAL: Heap on socket 0 was expanded by 2MB 00:06:26.709 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:26.709 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:26.709 EAL: Mem event callback 'spdk:(nil)' registered 00:06:26.709 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:26.709 00:06:26.709 00:06:26.709 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.709 http://cunit.sourceforge.net/ 00:06:26.709 00:06:26.709 00:06:26.709 Suite: components_suite 00:06:27.276 Test: vtophys_malloc_test ...passed 00:06:27.276 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:27.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.276 EAL: Restoring previous memory policy: 4 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was expanded by 4MB 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was shrunk by 4MB 00:06:27.276 EAL: Trying to obtain current memory policy. 
00:06:27.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.276 EAL: Restoring previous memory policy: 4 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was expanded by 6MB 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was shrunk by 6MB 00:06:27.276 EAL: Trying to obtain current memory policy. 00:06:27.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.276 EAL: Restoring previous memory policy: 4 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was expanded by 10MB 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was shrunk by 10MB 00:06:27.276 EAL: Trying to obtain current memory policy. 00:06:27.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.276 EAL: Restoring previous memory policy: 4 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was expanded by 18MB 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was shrunk by 18MB 00:06:27.276 EAL: Trying to obtain current memory policy. 
00:06:27.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.276 EAL: Restoring previous memory policy: 4 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was expanded by 34MB 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was shrunk by 34MB 00:06:27.276 EAL: Trying to obtain current memory policy. 00:06:27.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.276 EAL: Restoring previous memory policy: 4 00:06:27.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.276 EAL: request: mp_malloc_sync 00:06:27.276 EAL: No shared files mode enabled, IPC is disabled 00:06:27.276 EAL: Heap on socket 0 was expanded by 66MB 00:06:27.534 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.534 EAL: request: mp_malloc_sync 00:06:27.534 EAL: No shared files mode enabled, IPC is disabled 00:06:27.534 EAL: Heap on socket 0 was shrunk by 66MB 00:06:27.534 EAL: Trying to obtain current memory policy. 00:06:27.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.534 EAL: Restoring previous memory policy: 4 00:06:27.534 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.534 EAL: request: mp_malloc_sync 00:06:27.534 EAL: No shared files mode enabled, IPC is disabled 00:06:27.534 EAL: Heap on socket 0 was expanded by 130MB 00:06:27.792 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.792 EAL: request: mp_malloc_sync 00:06:27.792 EAL: No shared files mode enabled, IPC is disabled 00:06:27.792 EAL: Heap on socket 0 was shrunk by 130MB 00:06:28.051 EAL: Trying to obtain current memory policy. 
00:06:28.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:28.051 EAL: Restoring previous memory policy: 4 00:06:28.051 EAL: Calling mem event callback 'spdk:(nil)' 00:06:28.051 EAL: request: mp_malloc_sync 00:06:28.051 EAL: No shared files mode enabled, IPC is disabled 00:06:28.051 EAL: Heap on socket 0 was expanded by 258MB 00:06:28.309 EAL: Calling mem event callback 'spdk:(nil)' 00:06:28.309 EAL: request: mp_malloc_sync 00:06:28.309 EAL: No shared files mode enabled, IPC is disabled 00:06:28.309 EAL: Heap on socket 0 was shrunk by 258MB 00:06:28.877 EAL: Trying to obtain current memory policy. 00:06:28.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:28.877 EAL: Restoring previous memory policy: 4 00:06:28.877 EAL: Calling mem event callback 'spdk:(nil)' 00:06:28.877 EAL: request: mp_malloc_sync 00:06:28.877 EAL: No shared files mode enabled, IPC is disabled 00:06:28.877 EAL: Heap on socket 0 was expanded by 514MB 00:06:29.810 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.810 EAL: request: mp_malloc_sync 00:06:29.810 EAL: No shared files mode enabled, IPC is disabled 00:06:29.810 EAL: Heap on socket 0 was shrunk by 514MB 00:06:30.437 EAL: Trying to obtain current memory policy. 
00:06:30.437 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.437 EAL: Restoring previous memory policy: 4 00:06:30.437 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.437 EAL: request: mp_malloc_sync 00:06:30.437 EAL: No shared files mode enabled, IPC is disabled 00:06:30.437 EAL: Heap on socket 0 was expanded by 1026MB 00:06:32.355 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.355 EAL: request: mp_malloc_sync 00:06:32.355 EAL: No shared files mode enabled, IPC is disabled 00:06:32.355 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:33.733 passed 00:06:33.733 00:06:33.733 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.733 suites 1 1 n/a 0 0 00:06:33.733 tests 2 2 2 0 0 00:06:33.733 asserts 5327 5327 5327 0 n/a 00:06:33.733 00:06:33.733 Elapsed time = 6.860 seconds 00:06:33.733 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.733 EAL: request: mp_malloc_sync 00:06:33.733 EAL: No shared files mode enabled, IPC is disabled 00:06:33.733 EAL: Heap on socket 0 was shrunk by 2MB 00:06:33.733 EAL: No shared files mode enabled, IPC is disabled 00:06:33.733 EAL: No shared files mode enabled, IPC is disabled 00:06:33.733 EAL: No shared files mode enabled, IPC is disabled 00:06:33.733 00:06:33.733 real 0m7.215s 00:06:33.733 user 0m6.379s 00:06:33.733 sys 0m0.671s 00:06:33.733 10:35:04 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.733 10:35:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:33.733 ************************************ 00:06:33.733 END TEST env_vtophys 00:06:33.733 ************************************ 00:06:33.733 10:35:04 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:33.733 10:35:04 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:33.733 10:35:04 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.733 10:35:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:33.733 
************************************ 00:06:33.733 START TEST env_pci 00:06:33.733 ************************************ 00:06:33.733 10:35:04 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:33.733 00:06:33.733 00:06:33.733 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.733 http://cunit.sourceforge.net/ 00:06:33.733 00:06:33.733 00:06:33.733 Suite: pci 00:06:33.733 Test: pci_hook ...[2024-11-15 10:35:04.202386] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57028 has claimed it 00:06:33.733 passed 00:06:33.733 00:06:33.733 EAL: Cannot find device (10000:00:01.0) 00:06:33.733 EAL: Failed to attach device on primary process 00:06:33.733 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.733 suites 1 1 n/a 0 0 00:06:33.733 tests 1 1 1 0 0 00:06:33.733 asserts 25 25 25 0 n/a 00:06:33.733 00:06:33.733 Elapsed time = 0.007 seconds 00:06:33.733 00:06:33.733 real 0m0.074s 00:06:33.733 user 0m0.032s 00:06:33.733 sys 0m0.042s 00:06:33.734 10:35:04 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.734 ************************************ 00:06:33.734 END TEST env_pci 00:06:33.734 ************************************ 00:06:33.734 10:35:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:33.734 10:35:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:33.734 10:35:04 env -- env/env.sh@15 -- # uname 00:06:33.734 10:35:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:33.734 10:35:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:33.734 10:35:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:33.734 10:35:04 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:33.734 10:35:04 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.734 10:35:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:33.734 ************************************ 00:06:33.734 START TEST env_dpdk_post_init 00:06:33.734 ************************************ 00:06:33.992 10:35:04 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:33.992 EAL: Detected CPU lcores: 10 00:06:33.992 EAL: Detected NUMA nodes: 1 00:06:33.992 EAL: Detected shared linkage of DPDK 00:06:33.992 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:33.992 EAL: Selected IOVA mode 'PA' 00:06:33.992 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:33.992 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:33.992 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:34.250 Starting DPDK initialization... 00:06:34.250 Starting SPDK post initialization... 00:06:34.250 SPDK NVMe probe 00:06:34.250 Attaching to 0000:00:10.0 00:06:34.250 Attaching to 0000:00:11.0 00:06:34.250 Attached to 0000:00:10.0 00:06:34.250 Attached to 0000:00:11.0 00:06:34.250 Cleaning up... 
00:06:34.250 00:06:34.250 real 0m0.305s 00:06:34.250 user 0m0.110s 00:06:34.250 sys 0m0.094s 00:06:34.250 10:35:04 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.250 ************************************ 00:06:34.250 END TEST env_dpdk_post_init 00:06:34.250 ************************************ 00:06:34.250 10:35:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:34.250 10:35:04 env -- env/env.sh@26 -- # uname 00:06:34.250 10:35:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:34.250 10:35:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:34.250 10:35:04 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:34.250 10:35:04 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.250 10:35:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.250 ************************************ 00:06:34.250 START TEST env_mem_callbacks 00:06:34.250 ************************************ 00:06:34.250 10:35:04 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:34.250 EAL: Detected CPU lcores: 10 00:06:34.250 EAL: Detected NUMA nodes: 1 00:06:34.250 EAL: Detected shared linkage of DPDK 00:06:34.250 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:34.250 EAL: Selected IOVA mode 'PA' 00:06:34.509 00:06:34.509 00:06:34.509 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.509 http://cunit.sourceforge.net/ 00:06:34.509 00:06:34.509 00:06:34.509 Suite: memory 00:06:34.509 Test: test ... 
00:06:34.509 register 0x200000200000 2097152 00:06:34.509 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:34.509 malloc 3145728 00:06:34.509 register 0x200000400000 4194304 00:06:34.509 buf 0x2000004fffc0 len 3145728 PASSED 00:06:34.509 malloc 64 00:06:34.509 buf 0x2000004ffec0 len 64 PASSED 00:06:34.509 malloc 4194304 00:06:34.509 register 0x200000800000 6291456 00:06:34.509 buf 0x2000009fffc0 len 4194304 PASSED 00:06:34.509 free 0x2000004fffc0 3145728 00:06:34.509 free 0x2000004ffec0 64 00:06:34.509 unregister 0x200000400000 4194304 PASSED 00:06:34.509 free 0x2000009fffc0 4194304 00:06:34.509 unregister 0x200000800000 6291456 PASSED 00:06:34.509 malloc 8388608 00:06:34.509 register 0x200000400000 10485760 00:06:34.509 buf 0x2000005fffc0 len 8388608 PASSED 00:06:34.509 free 0x2000005fffc0 8388608 00:06:34.509 unregister 0x200000400000 10485760 PASSED 00:06:34.509 passed 00:06:34.509 00:06:34.509 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.509 suites 1 1 n/a 0 0 00:06:34.509 tests 1 1 1 0 0 00:06:34.509 asserts 15 15 15 0 n/a 00:06:34.509 00:06:34.509 Elapsed time = 0.078 seconds 00:06:34.509 00:06:34.509 real 0m0.273s 00:06:34.509 user 0m0.106s 00:06:34.509 sys 0m0.065s 00:06:34.509 10:35:04 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.509 ************************************ 00:06:34.509 END TEST env_mem_callbacks 00:06:34.509 ************************************ 00:06:34.509 10:35:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:34.509 00:06:34.509 real 0m8.626s 00:06:34.509 user 0m7.119s 00:06:34.509 sys 0m1.124s 00:06:34.509 10:35:04 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.509 10:35:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.509 ************************************ 00:06:34.509 END TEST env 00:06:34.509 ************************************ 00:06:34.509 10:35:04 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:34.509 10:35:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:34.509 10:35:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.509 10:35:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.509 ************************************ 00:06:34.509 START TEST rpc 00:06:34.509 ************************************ 00:06:34.509 10:35:04 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:34.509 * Looking for test storage... 00:06:34.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.509 10:35:05 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.509 10:35:05 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.509 10:35:05 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.767 10:35:05 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.767 10:35:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.767 10:35:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.767 10:35:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.767 10:35:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.767 10:35:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.767 10:35:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.767 10:35:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.767 10:35:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.767 10:35:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.767 10:35:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.767 10:35:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.767 10:35:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:34.767 10:35:05 rpc -- scripts/common.sh@345 -- # : 1 00:06:34.767 10:35:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.767 10:35:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.767 10:35:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:34.767 10:35:05 rpc -- scripts/common.sh@353 -- # local d=1 00:06:34.767 10:35:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.767 10:35:05 rpc -- scripts/common.sh@355 -- # echo 1 00:06:34.767 10:35:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.767 10:35:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:34.767 10:35:05 rpc -- scripts/common.sh@353 -- # local d=2 00:06:34.767 10:35:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.767 10:35:05 rpc -- scripts/common.sh@355 -- # echo 2 00:06:34.767 10:35:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.767 10:35:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.767 10:35:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.767 10:35:05 rpc -- scripts/common.sh@368 -- # return 0 00:06:34.767 10:35:05 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.767 10:35:05 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.767 --rc genhtml_branch_coverage=1 00:06:34.767 --rc genhtml_function_coverage=1 00:06:34.767 --rc genhtml_legend=1 00:06:34.767 --rc geninfo_all_blocks=1 00:06:34.767 --rc geninfo_unexecuted_blocks=1 00:06:34.767 00:06:34.767 ' 00:06:34.767 10:35:05 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.767 --rc genhtml_branch_coverage=1 00:06:34.767 --rc genhtml_function_coverage=1 00:06:34.768 --rc genhtml_legend=1 00:06:34.768 --rc geninfo_all_blocks=1 00:06:34.768 --rc geninfo_unexecuted_blocks=1 00:06:34.768 00:06:34.768 ' 00:06:34.768 10:35:05 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:34.768 --rc genhtml_branch_coverage=1 00:06:34.768 --rc genhtml_function_coverage=1 00:06:34.768 --rc genhtml_legend=1 00:06:34.768 --rc geninfo_all_blocks=1 00:06:34.768 --rc geninfo_unexecuted_blocks=1 00:06:34.768 00:06:34.768 ' 00:06:34.768 10:35:05 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.768 --rc genhtml_branch_coverage=1 00:06:34.768 --rc genhtml_function_coverage=1 00:06:34.768 --rc genhtml_legend=1 00:06:34.768 --rc geninfo_all_blocks=1 00:06:34.768 --rc geninfo_unexecuted_blocks=1 00:06:34.768 00:06:34.768 ' 00:06:34.768 10:35:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57155 00:06:34.768 10:35:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.768 10:35:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57155 00:06:34.768 10:35:05 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:34.768 10:35:05 rpc -- common/autotest_common.sh@833 -- # '[' -z 57155 ']' 00:06:34.768 10:35:05 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.768 10:35:05 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.768 10:35:05 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.768 10:35:05 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.768 10:35:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.768 [2024-11-15 10:35:05.284874] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:06:34.768 [2024-11-15 10:35:05.285108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57155 ] 00:06:35.026 [2024-11-15 10:35:05.514762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.284 [2024-11-15 10:35:05.615234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:35.284 [2024-11-15 10:35:05.615309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57155' to capture a snapshot of events at runtime. 00:06:35.284 [2024-11-15 10:35:05.615326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.284 [2024-11-15 10:35:05.615340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.284 [2024-11-15 10:35:05.615365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57155 for offline analysis/debug. 
00:06:35.284 [2024-11-15 10:35:05.616562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.850 10:35:06 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:35.850 10:35:06 rpc -- common/autotest_common.sh@866 -- # return 0 00:06:35.850 10:35:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:35.850 10:35:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:35.850 10:35:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:35.850 10:35:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:35.850 10:35:06 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:35.850 10:35:06 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.850 10:35:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.850 ************************************ 00:06:35.850 START TEST rpc_integrity 00:06:35.850 ************************************ 00:06:35.850 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:35.850 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:35.850 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.850 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.850 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.850 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:35.850 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:36.109 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:36.109 10:35:06 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:36.109 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.109 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.109 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.109 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:36.109 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:36.109 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.109 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.109 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.109 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:36.109 { 00:06:36.109 "name": "Malloc0", 00:06:36.109 "aliases": [ 00:06:36.109 "8dde8031-b416-435f-bfca-e2050f86e82b" 00:06:36.109 ], 00:06:36.109 "product_name": "Malloc disk", 00:06:36.109 "block_size": 512, 00:06:36.109 "num_blocks": 16384, 00:06:36.109 "uuid": "8dde8031-b416-435f-bfca-e2050f86e82b", 00:06:36.109 "assigned_rate_limits": { 00:06:36.109 "rw_ios_per_sec": 0, 00:06:36.109 "rw_mbytes_per_sec": 0, 00:06:36.109 "r_mbytes_per_sec": 0, 00:06:36.109 "w_mbytes_per_sec": 0 00:06:36.109 }, 00:06:36.109 "claimed": false, 00:06:36.109 "zoned": false, 00:06:36.109 "supported_io_types": { 00:06:36.109 "read": true, 00:06:36.109 "write": true, 00:06:36.109 "unmap": true, 00:06:36.109 "flush": true, 00:06:36.109 "reset": true, 00:06:36.109 "nvme_admin": false, 00:06:36.109 "nvme_io": false, 00:06:36.109 "nvme_io_md": false, 00:06:36.109 "write_zeroes": true, 00:06:36.109 "zcopy": true, 00:06:36.109 "get_zone_info": false, 00:06:36.109 "zone_management": false, 00:06:36.109 "zone_append": false, 00:06:36.109 "compare": false, 00:06:36.109 "compare_and_write": false, 00:06:36.109 "abort": true, 00:06:36.109 "seek_hole": false, 
00:06:36.109 "seek_data": false, 00:06:36.109 "copy": true, 00:06:36.109 "nvme_iov_md": false 00:06:36.109 }, 00:06:36.109 "memory_domains": [ 00:06:36.109 { 00:06:36.109 "dma_device_id": "system", 00:06:36.109 "dma_device_type": 1 00:06:36.109 }, 00:06:36.109 { 00:06:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.110 "dma_device_type": 2 00:06:36.110 } 00:06:36.110 ], 00:06:36.110 "driver_specific": {} 00:06:36.110 } 00:06:36.110 ]' 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 [2024-11-15 10:35:06.536023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:36.110 [2024-11-15 10:35:06.536113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.110 [2024-11-15 10:35:06.536147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:36.110 [2024-11-15 10:35:06.536169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.110 [2024-11-15 10:35:06.539008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.110 [2024-11-15 10:35:06.539060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:36.110 Passthru0 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:36.110 { 00:06:36.110 "name": "Malloc0", 00:06:36.110 "aliases": [ 00:06:36.110 "8dde8031-b416-435f-bfca-e2050f86e82b" 00:06:36.110 ], 00:06:36.110 "product_name": "Malloc disk", 00:06:36.110 "block_size": 512, 00:06:36.110 "num_blocks": 16384, 00:06:36.110 "uuid": "8dde8031-b416-435f-bfca-e2050f86e82b", 00:06:36.110 "assigned_rate_limits": { 00:06:36.110 "rw_ios_per_sec": 0, 00:06:36.110 "rw_mbytes_per_sec": 0, 00:06:36.110 "r_mbytes_per_sec": 0, 00:06:36.110 "w_mbytes_per_sec": 0 00:06:36.110 }, 00:06:36.110 "claimed": true, 00:06:36.110 "claim_type": "exclusive_write", 00:06:36.110 "zoned": false, 00:06:36.110 "supported_io_types": { 00:06:36.110 "read": true, 00:06:36.110 "write": true, 00:06:36.110 "unmap": true, 00:06:36.110 "flush": true, 00:06:36.110 "reset": true, 00:06:36.110 "nvme_admin": false, 00:06:36.110 "nvme_io": false, 00:06:36.110 "nvme_io_md": false, 00:06:36.110 "write_zeroes": true, 00:06:36.110 "zcopy": true, 00:06:36.110 "get_zone_info": false, 00:06:36.110 "zone_management": false, 00:06:36.110 "zone_append": false, 00:06:36.110 "compare": false, 00:06:36.110 "compare_and_write": false, 00:06:36.110 "abort": true, 00:06:36.110 "seek_hole": false, 00:06:36.110 "seek_data": false, 00:06:36.110 "copy": true, 00:06:36.110 "nvme_iov_md": false 00:06:36.110 }, 00:06:36.110 "memory_domains": [ 00:06:36.110 { 00:06:36.110 "dma_device_id": "system", 00:06:36.110 "dma_device_type": 1 00:06:36.110 }, 00:06:36.110 { 00:06:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.110 "dma_device_type": 2 00:06:36.110 } 00:06:36.110 ], 00:06:36.110 "driver_specific": {} 00:06:36.110 }, 00:06:36.110 { 00:06:36.110 "name": "Passthru0", 00:06:36.110 "aliases": [ 00:06:36.110 "38c51558-c5e8-5e3e-ada5-ec2f572d1431" 00:06:36.110 ], 00:06:36.110 "product_name": "passthru", 00:06:36.110 
"block_size": 512, 00:06:36.110 "num_blocks": 16384, 00:06:36.110 "uuid": "38c51558-c5e8-5e3e-ada5-ec2f572d1431", 00:06:36.110 "assigned_rate_limits": { 00:06:36.110 "rw_ios_per_sec": 0, 00:06:36.110 "rw_mbytes_per_sec": 0, 00:06:36.110 "r_mbytes_per_sec": 0, 00:06:36.110 "w_mbytes_per_sec": 0 00:06:36.110 }, 00:06:36.110 "claimed": false, 00:06:36.110 "zoned": false, 00:06:36.110 "supported_io_types": { 00:06:36.110 "read": true, 00:06:36.110 "write": true, 00:06:36.110 "unmap": true, 00:06:36.110 "flush": true, 00:06:36.110 "reset": true, 00:06:36.110 "nvme_admin": false, 00:06:36.110 "nvme_io": false, 00:06:36.110 "nvme_io_md": false, 00:06:36.110 "write_zeroes": true, 00:06:36.110 "zcopy": true, 00:06:36.110 "get_zone_info": false, 00:06:36.110 "zone_management": false, 00:06:36.110 "zone_append": false, 00:06:36.110 "compare": false, 00:06:36.110 "compare_and_write": false, 00:06:36.110 "abort": true, 00:06:36.110 "seek_hole": false, 00:06:36.110 "seek_data": false, 00:06:36.110 "copy": true, 00:06:36.110 "nvme_iov_md": false 00:06:36.110 }, 00:06:36.110 "memory_domains": [ 00:06:36.110 { 00:06:36.110 "dma_device_id": "system", 00:06:36.110 "dma_device_type": 1 00:06:36.110 }, 00:06:36.110 { 00:06:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.110 "dma_device_type": 2 00:06:36.110 } 00:06:36.110 ], 00:06:36.110 "driver_specific": { 00:06:36.110 "passthru": { 00:06:36.110 "name": "Passthru0", 00:06:36.110 "base_bdev_name": "Malloc0" 00:06:36.110 } 00:06:36.110 } 00:06:36.110 } 00:06:36.110 ]' 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 10:35:06 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:36.110 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:36.369 10:35:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:36.369 00:06:36.369 real 0m0.317s 00:06:36.369 user 0m0.200s 00:06:36.369 sys 0m0.032s 00:06:36.369 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.369 10:35:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.369 ************************************ 00:06:36.369 END TEST rpc_integrity 00:06:36.369 ************************************ 00:06:36.369 10:35:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:36.369 10:35:06 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:36.369 10:35:06 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.369 10:35:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.369 ************************************ 00:06:36.369 START TEST rpc_plugins 00:06:36.369 ************************************ 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:36.369 { 00:06:36.369 "name": "Malloc1", 00:06:36.369 "aliases": [ 00:06:36.369 "7edc2a67-78fd-44f8-823c-bed2f8dd7fd8" 00:06:36.369 ], 00:06:36.369 "product_name": "Malloc disk", 00:06:36.369 "block_size": 4096, 00:06:36.369 "num_blocks": 256, 00:06:36.369 "uuid": "7edc2a67-78fd-44f8-823c-bed2f8dd7fd8", 00:06:36.369 "assigned_rate_limits": { 00:06:36.369 "rw_ios_per_sec": 0, 00:06:36.369 "rw_mbytes_per_sec": 0, 00:06:36.369 "r_mbytes_per_sec": 0, 00:06:36.369 "w_mbytes_per_sec": 0 00:06:36.369 }, 00:06:36.369 "claimed": false, 00:06:36.369 "zoned": false, 00:06:36.369 "supported_io_types": { 00:06:36.369 "read": true, 00:06:36.369 "write": true, 00:06:36.369 "unmap": true, 00:06:36.369 "flush": true, 00:06:36.369 "reset": true, 00:06:36.369 "nvme_admin": false, 00:06:36.369 "nvme_io": false, 00:06:36.369 "nvme_io_md": false, 00:06:36.369 "write_zeroes": true, 00:06:36.369 "zcopy": true, 00:06:36.369 "get_zone_info": false, 00:06:36.369 "zone_management": false, 00:06:36.369 "zone_append": false, 00:06:36.369 "compare": false, 00:06:36.369 "compare_and_write": false, 00:06:36.369 "abort": true, 00:06:36.369 "seek_hole": false, 00:06:36.369 "seek_data": false, 00:06:36.369 "copy": 
true, 00:06:36.369 "nvme_iov_md": false 00:06:36.369 }, 00:06:36.369 "memory_domains": [ 00:06:36.369 { 00:06:36.369 "dma_device_id": "system", 00:06:36.369 "dma_device_type": 1 00:06:36.369 }, 00:06:36.369 { 00:06:36.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.369 "dma_device_type": 2 00:06:36.369 } 00:06:36.369 ], 00:06:36.369 "driver_specific": {} 00:06:36.369 } 00:06:36.369 ]' 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:36.369 10:35:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:36.369 00:06:36.369 real 0m0.167s 00:06:36.369 user 0m0.115s 00:06:36.369 sys 0m0.013s 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.369 ************************************ 00:06:36.369 END TEST rpc_plugins 00:06:36.369 ************************************ 00:06:36.369 10:35:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.628 10:35:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:36.628 10:35:06 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:36.628 10:35:06 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.628 10:35:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.628 ************************************ 00:06:36.628 START TEST rpc_trace_cmd_test 00:06:36.628 ************************************ 00:06:36.628 10:35:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:06:36.628 10:35:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:36.628 10:35:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:36.628 10:35:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.628 10:35:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.628 10:35:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.628 10:35:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:36.628 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57155", 00:06:36.628 "tpoint_group_mask": "0x8", 00:06:36.628 "iscsi_conn": { 00:06:36.628 "mask": "0x2", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "scsi": { 00:06:36.628 "mask": "0x4", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "bdev": { 00:06:36.628 "mask": "0x8", 00:06:36.628 "tpoint_mask": "0xffffffffffffffff" 00:06:36.628 }, 00:06:36.628 "nvmf_rdma": { 00:06:36.628 "mask": "0x10", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "nvmf_tcp": { 00:06:36.628 "mask": "0x20", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "ftl": { 00:06:36.628 "mask": "0x40", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "blobfs": { 00:06:36.628 "mask": "0x80", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "dsa": { 00:06:36.628 "mask": "0x200", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "thread": { 00:06:36.628 "mask": "0x400", 00:06:36.628 
"tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "nvme_pcie": { 00:06:36.628 "mask": "0x800", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "iaa": { 00:06:36.628 "mask": "0x1000", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "nvme_tcp": { 00:06:36.628 "mask": "0x2000", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "bdev_nvme": { 00:06:36.628 "mask": "0x4000", 00:06:36.628 "tpoint_mask": "0x0" 00:06:36.628 }, 00:06:36.628 "sock": { 00:06:36.628 "mask": "0x8000", 00:06:36.629 "tpoint_mask": "0x0" 00:06:36.629 }, 00:06:36.629 "blob": { 00:06:36.629 "mask": "0x10000", 00:06:36.629 "tpoint_mask": "0x0" 00:06:36.629 }, 00:06:36.629 "bdev_raid": { 00:06:36.629 "mask": "0x20000", 00:06:36.629 "tpoint_mask": "0x0" 00:06:36.629 }, 00:06:36.629 "scheduler": { 00:06:36.629 "mask": "0x40000", 00:06:36.629 "tpoint_mask": "0x0" 00:06:36.629 } 00:06:36.629 }' 00:06:36.629 10:35:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:36.629 10:35:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:36.629 10:35:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:36.629 10:35:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:36.629 10:35:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:36.629 10:35:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:36.629 10:35:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:36.629 10:35:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:36.629 10:35:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:36.888 10:35:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:36.888 00:06:36.888 real 0m0.266s 00:06:36.888 user 0m0.231s 00:06:36.888 sys 0m0.026s 00:06:36.888 10:35:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
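The trace_get_info output above reports tpoint_group_mask 0x8 and shows each group's bit mask; only the bdev group (mask 0x8) carries a non-zero tpoint_mask. A minimal sketch of how such a group mask decodes, using the group names and bit values copied from the JSON in the log (the variable names and the lookup table itself are our own, not part of the test suite):

```shell
#!/usr/bin/env bash
# Decode an SPDK tpoint_group_mask against the group bit values reported
# by trace_get_info above. Only groups whose bit is set are enabled.
group_mask=0x8
declare -A groups=( [0x2]=iscsi_conn [0x4]=scsi [0x8]=bdev [0x10]=nvmf_rdma [0x20]=nvmf_tcp )
enabled=""
for m in "${!groups[@]}"; do
  if (( group_mask & m )); then      # hex strings evaluate in (( ... ))
    enabled+="${groups[$m]} "
  fi
done
echo "enabled groups: ${enabled}"
```

With group_mask=0x8 only the bdev bit matches, which is consistent with the jq checks in the log asserting that .bdev.tpoint_mask is non-zero.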
00:06:36.888 ************************************ 00:06:36.888 END TEST rpc_trace_cmd_test 00:06:36.888 ************************************ 00:06:36.888 10:35:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.888 10:35:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:36.888 10:35:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:36.888 10:35:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:36.888 10:35:07 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:36.888 10:35:07 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.888 10:35:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.888 ************************************ 00:06:36.888 START TEST rpc_daemon_integrity 00:06:36.888 ************************************ 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:36.888 { 00:06:36.888 "name": "Malloc2", 00:06:36.888 "aliases": [ 00:06:36.888 "80ed1454-d244-4df7-993f-12d1289358d9" 00:06:36.888 ], 00:06:36.888 "product_name": "Malloc disk", 00:06:36.888 "block_size": 512, 00:06:36.888 "num_blocks": 16384, 00:06:36.888 "uuid": "80ed1454-d244-4df7-993f-12d1289358d9", 00:06:36.888 "assigned_rate_limits": { 00:06:36.888 "rw_ios_per_sec": 0, 00:06:36.888 "rw_mbytes_per_sec": 0, 00:06:36.888 "r_mbytes_per_sec": 0, 00:06:36.888 "w_mbytes_per_sec": 0 00:06:36.888 }, 00:06:36.888 "claimed": false, 00:06:36.888 "zoned": false, 00:06:36.888 "supported_io_types": { 00:06:36.888 "read": true, 00:06:36.888 "write": true, 00:06:36.888 "unmap": true, 00:06:36.888 "flush": true, 00:06:36.888 "reset": true, 00:06:36.888 "nvme_admin": false, 00:06:36.888 "nvme_io": false, 00:06:36.888 "nvme_io_md": false, 00:06:36.888 "write_zeroes": true, 00:06:36.888 "zcopy": true, 00:06:36.888 "get_zone_info": false, 00:06:36.888 "zone_management": false, 00:06:36.888 "zone_append": false, 00:06:36.888 "compare": false, 00:06:36.888 "compare_and_write": false, 00:06:36.888 "abort": true, 00:06:36.888 "seek_hole": false, 00:06:36.888 "seek_data": false, 00:06:36.888 "copy": true, 00:06:36.888 "nvme_iov_md": false 00:06:36.888 }, 00:06:36.888 "memory_domains": [ 00:06:36.888 { 00:06:36.888 "dma_device_id": "system", 00:06:36.888 "dma_device_type": 1 00:06:36.888 }, 00:06:36.888 { 00:06:36.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.888 "dma_device_type": 2 00:06:36.888 } 
00:06:36.888 ], 00:06:36.888 "driver_specific": {} 00:06:36.888 } 00:06:36.888 ]' 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.888 [2024-11-15 10:35:07.438610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:36.888 [2024-11-15 10:35:07.438700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.888 [2024-11-15 10:35:07.438735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:36.888 [2024-11-15 10:35:07.438752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.888 [2024-11-15 10:35:07.441581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.888 [2024-11-15 10:35:07.441634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:36.888 Passthru0 00:06:36.888 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:37.147 { 00:06:37.147 "name": "Malloc2", 00:06:37.147 "aliases": [ 00:06:37.147 "80ed1454-d244-4df7-993f-12d1289358d9" 
00:06:37.147 ], 00:06:37.147 "product_name": "Malloc disk", 00:06:37.147 "block_size": 512, 00:06:37.147 "num_blocks": 16384, 00:06:37.147 "uuid": "80ed1454-d244-4df7-993f-12d1289358d9", 00:06:37.147 "assigned_rate_limits": { 00:06:37.147 "rw_ios_per_sec": 0, 00:06:37.147 "rw_mbytes_per_sec": 0, 00:06:37.147 "r_mbytes_per_sec": 0, 00:06:37.147 "w_mbytes_per_sec": 0 00:06:37.147 }, 00:06:37.147 "claimed": true, 00:06:37.147 "claim_type": "exclusive_write", 00:06:37.147 "zoned": false, 00:06:37.147 "supported_io_types": { 00:06:37.147 "read": true, 00:06:37.147 "write": true, 00:06:37.147 "unmap": true, 00:06:37.147 "flush": true, 00:06:37.147 "reset": true, 00:06:37.147 "nvme_admin": false, 00:06:37.147 "nvme_io": false, 00:06:37.147 "nvme_io_md": false, 00:06:37.147 "write_zeroes": true, 00:06:37.147 "zcopy": true, 00:06:37.147 "get_zone_info": false, 00:06:37.147 "zone_management": false, 00:06:37.147 "zone_append": false, 00:06:37.147 "compare": false, 00:06:37.147 "compare_and_write": false, 00:06:37.147 "abort": true, 00:06:37.147 "seek_hole": false, 00:06:37.147 "seek_data": false, 00:06:37.147 "copy": true, 00:06:37.147 "nvme_iov_md": false 00:06:37.147 }, 00:06:37.147 "memory_domains": [ 00:06:37.147 { 00:06:37.147 "dma_device_id": "system", 00:06:37.147 "dma_device_type": 1 00:06:37.147 }, 00:06:37.147 { 00:06:37.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.147 "dma_device_type": 2 00:06:37.147 } 00:06:37.147 ], 00:06:37.147 "driver_specific": {} 00:06:37.147 }, 00:06:37.147 { 00:06:37.147 "name": "Passthru0", 00:06:37.147 "aliases": [ 00:06:37.147 "11605e92-58bf-5846-adfc-4132d8581c10" 00:06:37.147 ], 00:06:37.147 "product_name": "passthru", 00:06:37.147 "block_size": 512, 00:06:37.147 "num_blocks": 16384, 00:06:37.147 "uuid": "11605e92-58bf-5846-adfc-4132d8581c10", 00:06:37.147 "assigned_rate_limits": { 00:06:37.147 "rw_ios_per_sec": 0, 00:06:37.147 "rw_mbytes_per_sec": 0, 00:06:37.147 "r_mbytes_per_sec": 0, 00:06:37.147 "w_mbytes_per_sec": 0 
00:06:37.147 }, 00:06:37.147 "claimed": false, 00:06:37.147 "zoned": false, 00:06:37.147 "supported_io_types": { 00:06:37.147 "read": true, 00:06:37.147 "write": true, 00:06:37.147 "unmap": true, 00:06:37.147 "flush": true, 00:06:37.147 "reset": true, 00:06:37.147 "nvme_admin": false, 00:06:37.147 "nvme_io": false, 00:06:37.147 "nvme_io_md": false, 00:06:37.147 "write_zeroes": true, 00:06:37.147 "zcopy": true, 00:06:37.147 "get_zone_info": false, 00:06:37.147 "zone_management": false, 00:06:37.147 "zone_append": false, 00:06:37.147 "compare": false, 00:06:37.147 "compare_and_write": false, 00:06:37.147 "abort": true, 00:06:37.147 "seek_hole": false, 00:06:37.147 "seek_data": false, 00:06:37.147 "copy": true, 00:06:37.147 "nvme_iov_md": false 00:06:37.147 }, 00:06:37.147 "memory_domains": [ 00:06:37.147 { 00:06:37.147 "dma_device_id": "system", 00:06:37.147 "dma_device_type": 1 00:06:37.147 }, 00:06:37.147 { 00:06:37.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.147 "dma_device_type": 2 00:06:37.147 } 00:06:37.147 ], 00:06:37.147 "driver_specific": { 00:06:37.147 "passthru": { 00:06:37.147 "name": "Passthru0", 00:06:37.147 "base_bdev_name": "Malloc2" 00:06:37.147 } 00:06:37.147 } 00:06:37.147 } 00:06:37.147 ]' 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:37.147 00:06:37.147 real 0m0.347s 00:06:37.147 user 0m0.220s 00:06:37.147 sys 0m0.038s 00:06:37.147 ************************************ 00:06:37.147 END TEST rpc_daemon_integrity 00:06:37.147 ************************************ 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:37.147 10:35:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.147 10:35:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:37.147 10:35:07 rpc -- rpc/rpc.sh@84 -- # killprocess 57155 00:06:37.147 10:35:07 rpc -- common/autotest_common.sh@952 -- # '[' -z 57155 ']' 00:06:37.147 10:35:07 rpc -- common/autotest_common.sh@956 -- # kill -0 57155 00:06:37.147 10:35:07 rpc -- common/autotest_common.sh@957 -- # uname 00:06:37.147 10:35:07 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:37.147 10:35:07 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57155 00:06:37.147 10:35:07 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:37.148 10:35:07 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:37.148 
killing process with pid 57155 00:06:37.148 10:35:07 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57155' 00:06:37.148 10:35:07 rpc -- common/autotest_common.sh@971 -- # kill 57155 00:06:37.148 10:35:07 rpc -- common/autotest_common.sh@976 -- # wait 57155 00:06:39.693 00:06:39.693 real 0m4.817s 00:06:39.693 user 0m5.726s 00:06:39.693 sys 0m0.685s 00:06:39.693 10:35:09 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.693 10:35:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.693 ************************************ 00:06:39.693 END TEST rpc 00:06:39.693 ************************************ 00:06:39.693 10:35:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:39.693 10:35:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.693 10:35:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.693 10:35:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.693 ************************************ 00:06:39.693 START TEST skip_rpc 00:06:39.693 ************************************ 00:06:39.693 10:35:09 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:39.693 * Looking for test storage... 
00:06:39.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:39.693 10:35:09 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.693 10:35:09 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.693 10:35:09 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.693 10:35:10 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.693 10:35:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:39.693 10:35:10 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.694 10:35:10 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:39.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.694 --rc genhtml_branch_coverage=1 00:06:39.694 --rc genhtml_function_coverage=1 00:06:39.694 --rc genhtml_legend=1 00:06:39.694 --rc geninfo_all_blocks=1 00:06:39.694 --rc geninfo_unexecuted_blocks=1 00:06:39.694 00:06:39.694 ' 00:06:39.694 10:35:10 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:39.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.694 --rc genhtml_branch_coverage=1 00:06:39.694 --rc genhtml_function_coverage=1 00:06:39.694 --rc genhtml_legend=1 00:06:39.694 --rc geninfo_all_blocks=1 00:06:39.694 --rc geninfo_unexecuted_blocks=1 00:06:39.694 00:06:39.694 ' 00:06:39.694 10:35:10 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:39.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.694 --rc genhtml_branch_coverage=1 00:06:39.694 --rc genhtml_function_coverage=1 00:06:39.694 --rc genhtml_legend=1 00:06:39.694 --rc geninfo_all_blocks=1 00:06:39.694 --rc geninfo_unexecuted_blocks=1 00:06:39.694 00:06:39.694 ' 00:06:39.694 10:35:10 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:39.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.694 --rc genhtml_branch_coverage=1 00:06:39.694 --rc genhtml_function_coverage=1 00:06:39.694 --rc genhtml_legend=1 00:06:39.694 --rc geninfo_all_blocks=1 00:06:39.694 --rc geninfo_unexecuted_blocks=1 00:06:39.694 00:06:39.694 ' 00:06:39.694 10:35:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:39.694 10:35:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:39.694 10:35:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:39.694 10:35:10 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.694 10:35:10 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.694 10:35:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.694 ************************************ 00:06:39.694 START TEST skip_rpc 00:06:39.694 ************************************ 00:06:39.694 10:35:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:39.694 10:35:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57379 00:06:39.694 10:35:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:39.694 10:35:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.694 10:35:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:39.694 [2024-11-15 10:35:10.174930] Starting SPDK v25.01-pre 
git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:06:39.694 [2024-11-15 10:35:10.175088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57379 ] 00:06:39.952 [2024-11-15 10:35:10.351035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.952 [2024-11-15 10:35:10.456460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57379 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57379 ']' 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57379 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57379 00:06:45.223 killing process with pid 57379 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57379' 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57379 00:06:45.223 10:35:15 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57379 00:06:47.150 ************************************ 00:06:47.150 END TEST skip_rpc 00:06:47.150 ************************************ 00:06:47.150 00:06:47.150 real 0m7.230s 00:06:47.150 user 0m6.785s 00:06:47.150 sys 0m0.329s 00:06:47.150 10:35:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.150 10:35:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.150 10:35:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:47.150 10:35:17 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.150 10:35:17 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.150 10:35:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.150 
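The skip_rpc test above starts the target with --no-rpc-server and then asserts that rpc_cmd spdk_get_version fails (es=1). A stub sketch of that control flow, with rpc_cmd replaced by a stand-in that always fails the way a missing RPC server would; the real rpc_cmd lives in the SPDK test helpers, not here:

```shell
#!/usr/bin/env bash
# Stand-in for rpc_cmd when no RPC server is listening: the call fails.
rpc_cmd() { return 1; }

# Mirror the NOT/es pattern from the log: the RPC is expected to fail,
# and the test records a non-zero exit status.
if rpc_cmd spdk_get_version; then
  es=0
else
  es=1
fi
echo "exit status: $es"
```

The test passes only when es ends up 1, i.e. the RPC path is genuinely disabled.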
************************************ 00:06:47.150 START TEST skip_rpc_with_json 00:06:47.150 ************************************ 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57486 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57486 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57486 ']' 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:47.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:47.150 10:35:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.150 [2024-11-15 10:35:17.460296] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:06:47.150 [2024-11-15 10:35:17.460471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57486 ] 00:06:47.150 [2024-11-15 10:35:17.630896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.407 [2024-11-15 10:35:17.738394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.972 [2024-11-15 10:35:18.508041] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:47.972 request: 00:06:47.972 { 00:06:47.972 "trtype": "tcp", 00:06:47.972 "method": "nvmf_get_transports", 00:06:47.972 "req_id": 1 00:06:47.972 } 00:06:47.972 Got JSON-RPC error response 00:06:47.972 response: 00:06:47.972 { 00:06:47.972 "code": -19, 00:06:47.972 "message": "No such device" 00:06:47.972 } 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.972 [2024-11-15 10:35:18.520226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
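The sequence above exercises the error path first: nvmf_get_transports before any transport exists returns JSON-RPC error code -19 ("No such device"), after which nvmf_create_transport -t tcp succeeds. A small sketch of handling that error response, with the response body copied from the log and parsed with jq in the same style the test scripts use:

```shell
#!/usr/bin/env bash
# Error body as reported by nvmf_get_transports in the log above.
response='{"code": -19, "message": "No such device"}'

# Extract the error code the way the surrounding tests use jq.
code=$(jq -r .code <<<"$response")
if [ "$code" -eq -19 ]; then
  echo "transport does not exist yet; create it first"
fi
```

This mirrors the test's ordering: probe, observe -19, then issue nvmf_create_transport.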
00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.972 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.232 10:35:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.232 { 00:06:48.232 "subsystems": [ 00:06:48.232 { 00:06:48.232 "subsystem": "fsdev", 00:06:48.232 "config": [ 00:06:48.232 { 00:06:48.232 "method": "fsdev_set_opts", 00:06:48.232 "params": { 00:06:48.232 "fsdev_io_pool_size": 65535, 00:06:48.232 "fsdev_io_cache_size": 256 00:06:48.232 } 00:06:48.232 } 00:06:48.232 ] 00:06:48.232 }, 00:06:48.232 { 00:06:48.232 "subsystem": "keyring", 00:06:48.232 "config": [] 00:06:48.232 }, 00:06:48.232 { 00:06:48.232 "subsystem": "iobuf", 00:06:48.232 "config": [ 00:06:48.232 { 00:06:48.232 "method": "iobuf_set_options", 00:06:48.232 "params": { 00:06:48.232 "small_pool_count": 8192, 00:06:48.232 "large_pool_count": 1024, 00:06:48.232 "small_bufsize": 8192, 00:06:48.232 "large_bufsize": 135168, 00:06:48.232 "enable_numa": false 00:06:48.232 } 00:06:48.232 } 00:06:48.232 ] 00:06:48.232 }, 00:06:48.232 { 00:06:48.232 "subsystem": "sock", 00:06:48.232 "config": [ 00:06:48.232 { 00:06:48.232 "method": "sock_set_default_impl", 00:06:48.232 "params": { 00:06:48.232 "impl_name": "posix" 00:06:48.232 } 00:06:48.232 }, 00:06:48.232 { 00:06:48.232 "method": "sock_impl_set_options", 00:06:48.232 "params": { 00:06:48.232 "impl_name": "ssl", 00:06:48.232 "recv_buf_size": 4096, 00:06:48.232 "send_buf_size": 4096, 00:06:48.232 "enable_recv_pipe": true, 00:06:48.232 "enable_quickack": false, 00:06:48.232 
"enable_placement_id": 0, 00:06:48.232 "enable_zerocopy_send_server": true, 00:06:48.232 "enable_zerocopy_send_client": false, 00:06:48.232 "zerocopy_threshold": 0, 00:06:48.232 "tls_version": 0, 00:06:48.232 "enable_ktls": false 00:06:48.232 } 00:06:48.232 }, 00:06:48.232 { 00:06:48.232 "method": "sock_impl_set_options", 00:06:48.232 "params": { 00:06:48.232 "impl_name": "posix", 00:06:48.232 "recv_buf_size": 2097152, 00:06:48.232 "send_buf_size": 2097152, 00:06:48.232 "enable_recv_pipe": true, 00:06:48.232 "enable_quickack": false, 00:06:48.232 "enable_placement_id": 0, 00:06:48.232 "enable_zerocopy_send_server": true, 00:06:48.232 "enable_zerocopy_send_client": false, 00:06:48.232 "zerocopy_threshold": 0, 00:06:48.232 "tls_version": 0, 00:06:48.232 "enable_ktls": false 00:06:48.232 } 00:06:48.232 } 00:06:48.232 ] 00:06:48.232 }, 00:06:48.232 { 00:06:48.232 "subsystem": "vmd", 00:06:48.232 "config": [] 00:06:48.232 }, 00:06:48.232 { 00:06:48.232 "subsystem": "accel", 00:06:48.232 "config": [ 00:06:48.232 { 00:06:48.232 "method": "accel_set_options", 00:06:48.232 "params": { 00:06:48.232 "small_cache_size": 128, 00:06:48.232 "large_cache_size": 16, 00:06:48.232 "task_count": 2048, 00:06:48.232 "sequence_count": 2048, 00:06:48.233 "buf_count": 2048 00:06:48.233 } 00:06:48.233 } 00:06:48.233 ] 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "subsystem": "bdev", 00:06:48.233 "config": [ 00:06:48.233 { 00:06:48.233 "method": "bdev_set_options", 00:06:48.233 "params": { 00:06:48.233 "bdev_io_pool_size": 65535, 00:06:48.233 "bdev_io_cache_size": 256, 00:06:48.233 "bdev_auto_examine": true, 00:06:48.233 "iobuf_small_cache_size": 128, 00:06:48.233 "iobuf_large_cache_size": 16 00:06:48.233 } 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "method": "bdev_raid_set_options", 00:06:48.233 "params": { 00:06:48.233 "process_window_size_kb": 1024, 00:06:48.233 "process_max_bandwidth_mb_sec": 0 00:06:48.233 } 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "method": "bdev_iscsi_set_options", 
00:06:48.233 "params": { 00:06:48.233 "timeout_sec": 30 00:06:48.233 } 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "method": "bdev_nvme_set_options", 00:06:48.233 "params": { 00:06:48.233 "action_on_timeout": "none", 00:06:48.233 "timeout_us": 0, 00:06:48.233 "timeout_admin_us": 0, 00:06:48.233 "keep_alive_timeout_ms": 10000, 00:06:48.233 "arbitration_burst": 0, 00:06:48.233 "low_priority_weight": 0, 00:06:48.233 "medium_priority_weight": 0, 00:06:48.233 "high_priority_weight": 0, 00:06:48.233 "nvme_adminq_poll_period_us": 10000, 00:06:48.233 "nvme_ioq_poll_period_us": 0, 00:06:48.233 "io_queue_requests": 0, 00:06:48.233 "delay_cmd_submit": true, 00:06:48.233 "transport_retry_count": 4, 00:06:48.233 "bdev_retry_count": 3, 00:06:48.233 "transport_ack_timeout": 0, 00:06:48.233 "ctrlr_loss_timeout_sec": 0, 00:06:48.233 "reconnect_delay_sec": 0, 00:06:48.233 "fast_io_fail_timeout_sec": 0, 00:06:48.233 "disable_auto_failback": false, 00:06:48.233 "generate_uuids": false, 00:06:48.233 "transport_tos": 0, 00:06:48.233 "nvme_error_stat": false, 00:06:48.233 "rdma_srq_size": 0, 00:06:48.233 "io_path_stat": false, 00:06:48.233 "allow_accel_sequence": false, 00:06:48.233 "rdma_max_cq_size": 0, 00:06:48.233 "rdma_cm_event_timeout_ms": 0, 00:06:48.233 "dhchap_digests": [ 00:06:48.233 "sha256", 00:06:48.233 "sha384", 00:06:48.233 "sha512" 00:06:48.233 ], 00:06:48.233 "dhchap_dhgroups": [ 00:06:48.233 "null", 00:06:48.233 "ffdhe2048", 00:06:48.233 "ffdhe3072", 00:06:48.233 "ffdhe4096", 00:06:48.233 "ffdhe6144", 00:06:48.233 "ffdhe8192" 00:06:48.233 ] 00:06:48.233 } 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "method": "bdev_nvme_set_hotplug", 00:06:48.233 "params": { 00:06:48.233 "period_us": 100000, 00:06:48.233 "enable": false 00:06:48.233 } 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "method": "bdev_wait_for_examine" 00:06:48.233 } 00:06:48.233 ] 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "subsystem": "scsi", 00:06:48.233 "config": null 00:06:48.233 }, 00:06:48.233 { 
00:06:48.233 "subsystem": "scheduler", 00:06:48.233 "config": [ 00:06:48.233 { 00:06:48.233 "method": "framework_set_scheduler", 00:06:48.233 "params": { 00:06:48.233 "name": "static" 00:06:48.233 } 00:06:48.233 } 00:06:48.233 ] 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "subsystem": "vhost_scsi", 00:06:48.233 "config": [] 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "subsystem": "vhost_blk", 00:06:48.233 "config": [] 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "subsystem": "ublk", 00:06:48.233 "config": [] 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "subsystem": "nbd", 00:06:48.233 "config": [] 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "subsystem": "nvmf", 00:06:48.233 "config": [ 00:06:48.233 { 00:06:48.233 "method": "nvmf_set_config", 00:06:48.233 "params": { 00:06:48.233 "discovery_filter": "match_any", 00:06:48.233 "admin_cmd_passthru": { 00:06:48.233 "identify_ctrlr": false 00:06:48.233 }, 00:06:48.233 "dhchap_digests": [ 00:06:48.233 "sha256", 00:06:48.233 "sha384", 00:06:48.233 "sha512" 00:06:48.233 ], 00:06:48.233 "dhchap_dhgroups": [ 00:06:48.233 "null", 00:06:48.233 "ffdhe2048", 00:06:48.233 "ffdhe3072", 00:06:48.233 "ffdhe4096", 00:06:48.233 "ffdhe6144", 00:06:48.233 "ffdhe8192" 00:06:48.233 ] 00:06:48.233 } 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "method": "nvmf_set_max_subsystems", 00:06:48.233 "params": { 00:06:48.233 "max_subsystems": 1024 00:06:48.233 } 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "method": "nvmf_set_crdt", 00:06:48.233 "params": { 00:06:48.233 "crdt1": 0, 00:06:48.233 "crdt2": 0, 00:06:48.233 "crdt3": 0 00:06:48.233 } 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "method": "nvmf_create_transport", 00:06:48.233 "params": { 00:06:48.233 "trtype": "TCP", 00:06:48.233 "max_queue_depth": 128, 00:06:48.233 "max_io_qpairs_per_ctrlr": 127, 00:06:48.233 "in_capsule_data_size": 4096, 00:06:48.233 "max_io_size": 131072, 00:06:48.233 "io_unit_size": 131072, 00:06:48.233 "max_aq_depth": 128, 00:06:48.233 "num_shared_buffers": 511, 
00:06:48.233 "buf_cache_size": 4294967295, 00:06:48.233 "dif_insert_or_strip": false, 00:06:48.233 "zcopy": false, 00:06:48.233 "c2h_success": true, 00:06:48.233 "sock_priority": 0, 00:06:48.233 "abort_timeout_sec": 1, 00:06:48.233 "ack_timeout": 0, 00:06:48.233 "data_wr_pool_size": 0 00:06:48.233 } 00:06:48.233 } 00:06:48.233 ] 00:06:48.233 }, 00:06:48.233 { 00:06:48.233 "subsystem": "iscsi", 00:06:48.233 "config": [ 00:06:48.233 { 00:06:48.233 "method": "iscsi_set_options", 00:06:48.233 "params": { 00:06:48.233 "node_base": "iqn.2016-06.io.spdk", 00:06:48.233 "max_sessions": 128, 00:06:48.233 "max_connections_per_session": 2, 00:06:48.233 "max_queue_depth": 64, 00:06:48.233 "default_time2wait": 2, 00:06:48.233 "default_time2retain": 20, 00:06:48.233 "first_burst_length": 8192, 00:06:48.233 "immediate_data": true, 00:06:48.233 "allow_duplicated_isid": false, 00:06:48.233 "error_recovery_level": 0, 00:06:48.233 "nop_timeout": 60, 00:06:48.233 "nop_in_interval": 30, 00:06:48.233 "disable_chap": false, 00:06:48.233 "require_chap": false, 00:06:48.233 "mutual_chap": false, 00:06:48.233 "chap_group": 0, 00:06:48.233 "max_large_datain_per_connection": 64, 00:06:48.233 "max_r2t_per_connection": 4, 00:06:48.233 "pdu_pool_size": 36864, 00:06:48.233 "immediate_data_pool_size": 16384, 00:06:48.233 "data_out_pool_size": 2048 00:06:48.233 } 00:06:48.233 } 00:06:48.233 ] 00:06:48.233 } 00:06:48.233 ] 00:06:48.233 } 00:06:48.233 10:35:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:48.233 10:35:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57486 00:06:48.233 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57486 ']' 00:06:48.233 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57486 00:06:48.233 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:48.233 10:35:18 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:48.233 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57486 00:06:48.233 killing process with pid 57486 00:06:48.233 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:48.234 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:48.234 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57486' 00:06:48.234 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57486 00:06:48.234 10:35:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57486 00:06:50.767 10:35:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57539 00:06:50.767 10:35:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:50.767 10:35:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57539 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57539 ']' 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57539 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57539 00:06:56.109 killing process with pid 57539 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57539' 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57539 00:06:56.109 10:35:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57539 00:06:57.485 10:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:57.485 10:35:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:57.485 00:06:57.485 real 0m10.643s 00:06:57.485 user 0m10.289s 00:06:57.485 sys 0m0.701s 00:06:57.485 10:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.485 ************************************ 00:06:57.485 END TEST skip_rpc_with_json 00:06:57.485 ************************************ 00:06:57.485 10:35:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.485 10:35:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:57.485 10:35:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:57.485 10:35:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.485 10:35:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.485 ************************************ 00:06:57.485 START TEST skip_rpc_with_delay 00:06:57.485 ************************************ 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:57.485 
10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:57.485 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:57.743 [2024-11-15 10:35:28.150321] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:57.743 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:57.743 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.743 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.743 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.743 ************************************ 00:06:57.743 END TEST skip_rpc_with_delay 00:06:57.743 ************************************ 00:06:57.743 00:06:57.743 real 0m0.193s 00:06:57.743 user 0m0.116s 00:06:57.743 sys 0m0.075s 00:06:57.743 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.743 10:35:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:57.743 10:35:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:57.743 10:35:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:57.743 10:35:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:57.743 10:35:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:57.743 10:35:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.743 10:35:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.743 ************************************ 00:06:57.743 START TEST exit_on_failed_rpc_init 00:06:57.743 ************************************ 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57667 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57667 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57667 ']' 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.743 10:35:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:58.002 [2024-11-15 10:35:28.377585] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:06:58.002 [2024-11-15 10:35:28.377747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57667 ] 00:06:58.002 [2024-11-15 10:35:28.552554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.261 [2024-11-15 10:35:28.692163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@650 -- # local es=0 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:59.193 10:35:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:59.193 [2024-11-15 10:35:29.675850] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:06:59.193 [2024-11-15 10:35:29.676057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57685 ] 00:06:59.451 [2024-11-15 10:35:29.900597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.709 [2024-11-15 10:35:30.063125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.709 [2024-11-15 10:35:30.063253] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:59.709 [2024-11-15 10:35:30.063276] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:59.709 [2024-11-15 10:35:30.063291] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57667 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57667 ']' 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57667 00:06:59.998 10:35:30 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57667 00:06:59.998 killing process with pid 57667 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57667' 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57667 00:06:59.998 10:35:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57667 00:07:02.528 ************************************ 00:07:02.528 END TEST exit_on_failed_rpc_init 00:07:02.528 ************************************ 00:07:02.528 00:07:02.528 real 0m4.194s 00:07:02.528 user 0m4.922s 00:07:02.528 sys 0m0.533s 00:07:02.528 10:35:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.528 10:35:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:02.528 10:35:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:02.528 00:07:02.528 real 0m22.640s 00:07:02.528 user 0m22.284s 00:07:02.528 sys 0m1.837s 00:07:02.528 ************************************ 00:07:02.528 END TEST skip_rpc 00:07:02.528 ************************************ 00:07:02.528 10:35:32 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.528 10:35:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.528 10:35:32 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:02.528 10:35:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.528 10:35:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.528 10:35:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.528 ************************************ 00:07:02.528 START TEST rpc_client 00:07:02.528 ************************************ 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:02.528 * Looking for test storage... 00:07:02.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@345 
-- # : 1 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.528 10:35:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.528 --rc genhtml_branch_coverage=1 00:07:02.528 --rc genhtml_function_coverage=1 00:07:02.528 --rc genhtml_legend=1 00:07:02.528 --rc geninfo_all_blocks=1 00:07:02.528 --rc geninfo_unexecuted_blocks=1 00:07:02.528 00:07:02.528 ' 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.528 --rc genhtml_branch_coverage=1 00:07:02.528 --rc genhtml_function_coverage=1 00:07:02.528 --rc 
genhtml_legend=1 00:07:02.528 --rc geninfo_all_blocks=1 00:07:02.528 --rc geninfo_unexecuted_blocks=1 00:07:02.528 00:07:02.528 ' 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.528 --rc genhtml_branch_coverage=1 00:07:02.528 --rc genhtml_function_coverage=1 00:07:02.528 --rc genhtml_legend=1 00:07:02.528 --rc geninfo_all_blocks=1 00:07:02.528 --rc geninfo_unexecuted_blocks=1 00:07:02.528 00:07:02.528 ' 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.528 --rc genhtml_branch_coverage=1 00:07:02.528 --rc genhtml_function_coverage=1 00:07:02.528 --rc genhtml_legend=1 00:07:02.528 --rc geninfo_all_blocks=1 00:07:02.528 --rc geninfo_unexecuted_blocks=1 00:07:02.528 00:07:02.528 ' 00:07:02.528 10:35:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:02.528 OK 00:07:02.528 10:35:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:02.528 ************************************ 00:07:02.528 END TEST rpc_client 00:07:02.528 ************************************ 00:07:02.528 00:07:02.528 real 0m0.228s 00:07:02.528 user 0m0.133s 00:07:02.528 sys 0m0.102s 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.528 10:35:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:02.528 10:35:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:02.528 10:35:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.528 10:35:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.528 10:35:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.528 ************************************ 00:07:02.528 START TEST json_config 
00:07:02.528 ************************************ 00:07:02.528 10:35:32 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:02.528 10:35:32 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.528 10:35:32 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.528 10:35:32 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.528 10:35:32 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.528 10:35:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.528 10:35:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.528 10:35:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.528 10:35:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.528 10:35:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.528 10:35:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.528 10:35:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.528 10:35:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.528 10:35:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.528 10:35:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.528 10:35:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.528 10:35:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:02.528 10:35:32 json_config -- scripts/common.sh@345 -- # : 1 00:07:02.528 10:35:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.528 10:35:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.528 10:35:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:02.528 10:35:32 json_config -- scripts/common.sh@353 -- # local d=1 00:07:02.528 10:35:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.528 10:35:32 json_config -- scripts/common.sh@355 -- # echo 1 00:07:02.528 10:35:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.528 10:35:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:02.528 10:35:32 json_config -- scripts/common.sh@353 -- # local d=2 00:07:02.528 10:35:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.528 10:35:32 json_config -- scripts/common.sh@355 -- # echo 2 00:07:02.528 10:35:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.528 10:35:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.528 10:35:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.528 10:35:32 json_config -- scripts/common.sh@368 -- # return 0 00:07:02.528 10:35:32 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.528 10:35:32 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.528 --rc genhtml_branch_coverage=1 00:07:02.528 --rc genhtml_function_coverage=1 00:07:02.528 --rc genhtml_legend=1 00:07:02.528 --rc geninfo_all_blocks=1 00:07:02.528 --rc geninfo_unexecuted_blocks=1 00:07:02.528 00:07:02.528 ' 00:07:02.528 10:35:32 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.528 --rc genhtml_branch_coverage=1 00:07:02.528 --rc genhtml_function_coverage=1 00:07:02.528 --rc genhtml_legend=1 00:07:02.528 --rc geninfo_all_blocks=1 00:07:02.528 --rc geninfo_unexecuted_blocks=1 00:07:02.528 00:07:02.528 ' 00:07:02.528 10:35:32 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.528 --rc genhtml_branch_coverage=1 00:07:02.528 --rc genhtml_function_coverage=1 00:07:02.528 --rc genhtml_legend=1 00:07:02.528 --rc geninfo_all_blocks=1 00:07:02.528 --rc geninfo_unexecuted_blocks=1 00:07:02.528 00:07:02.528 ' 00:07:02.528 10:35:32 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.528 --rc genhtml_branch_coverage=1 00:07:02.528 --rc genhtml_function_coverage=1 00:07:02.528 --rc genhtml_legend=1 00:07:02.528 --rc geninfo_all_blocks=1 00:07:02.528 --rc geninfo_unexecuted_blocks=1 00:07:02.528 00:07:02.528 ' 00:07:02.528 10:35:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.528 10:35:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:02.528 10:35:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.528 10:35:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.528 10:35:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.528 10:35:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.528 10:35:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.528 10:35:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:762637b5-3988-4abf-ad8d-a7e0f3892a8d 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=762637b5-3988-4abf-ad8d-a7e0f3892a8d 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.529 10:35:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.529 10:35:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.529 10:35:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.529 10:35:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.529 10:35:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.529 10:35:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.529 10:35:32 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.529 10:35:32 json_config -- paths/export.sh@5 -- # export PATH 00:07:02.529 10:35:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@51 -- # : 0 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.529 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.529 10:35:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.529 WARNING: No tests are enabled so not running JSON configuration tests 00:07:02.529 10:35:32 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:02.529 10:35:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:02.529 10:35:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:02.529 10:35:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:02.529 10:35:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:02.529 10:35:32 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:02.529 10:35:32 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:02.529 00:07:02.529 real 0m0.173s 00:07:02.529 user 0m0.107s 00:07:02.529 sys 0m0.068s 00:07:02.529 10:35:32 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.529 10:35:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.529 ************************************ 00:07:02.529 END TEST json_config 00:07:02.529 ************************************ 00:07:02.529 10:35:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:02.529 10:35:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.529 10:35:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.529 10:35:33 -- common/autotest_common.sh@10 -- # set +x 00:07:02.529 ************************************ 00:07:02.529 START TEST json_config_extra_key 00:07:02.529 ************************************ 00:07:02.529 10:35:33 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:02.529 10:35:33 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.529 10:35:33 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:07:02.529 10:35:33 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.789 10:35:33 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.789 10:35:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:02.789 10:35:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.789 10:35:33 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.790 --rc genhtml_branch_coverage=1 00:07:02.790 --rc genhtml_function_coverage=1 00:07:02.790 --rc genhtml_legend=1 00:07:02.790 --rc geninfo_all_blocks=1 00:07:02.790 --rc geninfo_unexecuted_blocks=1 00:07:02.790 00:07:02.790 ' 00:07:02.790 10:35:33 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.790 --rc genhtml_branch_coverage=1 00:07:02.790 --rc genhtml_function_coverage=1 00:07:02.790 --rc 
genhtml_legend=1 00:07:02.790 --rc geninfo_all_blocks=1 00:07:02.790 --rc geninfo_unexecuted_blocks=1 00:07:02.790 00:07:02.790 ' 00:07:02.790 10:35:33 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.790 --rc genhtml_branch_coverage=1 00:07:02.790 --rc genhtml_function_coverage=1 00:07:02.790 --rc genhtml_legend=1 00:07:02.790 --rc geninfo_all_blocks=1 00:07:02.790 --rc geninfo_unexecuted_blocks=1 00:07:02.790 00:07:02.790 ' 00:07:02.790 10:35:33 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.790 --rc genhtml_branch_coverage=1 00:07:02.790 --rc genhtml_function_coverage=1 00:07:02.790 --rc genhtml_legend=1 00:07:02.790 --rc geninfo_all_blocks=1 00:07:02.790 --rc geninfo_unexecuted_blocks=1 00:07:02.790 00:07:02.790 ' 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:762637b5-3988-4abf-ad8d-a7e0f3892a8d 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=762637b5-3988-4abf-ad8d-a7e0f3892a8d 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.790 10:35:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.790 10:35:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.790 10:35:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.790 10:35:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.790 10:35:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.790 10:35:33 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.790 10:35:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.790 10:35:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:02.790 10:35:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.790 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.790 10:35:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:02.790 INFO: launching applications... 00:07:02.790 Waiting for target to run... 00:07:02.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:02.790 10:35:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57895 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57895 /var/tmp/spdk_tgt.sock 00:07:02.790 10:35:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:02.790 10:35:33 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57895 ']' 00:07:02.790 10:35:33 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:02.790 10:35:33 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.790 10:35:33 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:02.790 10:35:33 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.790 10:35:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:02.790 [2024-11-15 10:35:33.335699] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:07:02.790 [2024-11-15 10:35:33.335871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57895 ] 00:07:03.358 [2024-11-15 10:35:33.677599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.358 [2024-11-15 10:35:33.773086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.971 10:35:34 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.971 10:35:34 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:07:03.971 00:07:03.971 10:35:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:03.971 INFO: shutting down applications... 00:07:03.971 10:35:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:03.971 10:35:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:03.971 10:35:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:03.971 10:35:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:03.971 10:35:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57895 ]] 00:07:03.971 10:35:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57895 00:07:03.971 10:35:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:03.971 10:35:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:03.971 10:35:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57895 00:07:03.971 10:35:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:04.535 10:35:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:04.535 10:35:34 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.535 10:35:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57895 00:07:04.535 10:35:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:05.102 10:35:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:05.102 10:35:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.102 10:35:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57895 00:07:05.102 10:35:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:05.668 10:35:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:05.668 10:35:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.668 10:35:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57895 00:07:05.668 10:35:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:05.926 10:35:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:05.926 10:35:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.926 10:35:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57895 00:07:05.926 10:35:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:06.493 SPDK target shutdown done 00:07:06.493 10:35:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:06.493 10:35:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:06.493 10:35:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57895 00:07:06.493 10:35:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:06.493 10:35:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:06.493 10:35:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:06.493 10:35:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:06.493 Success 
00:07:06.493 10:35:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:06.493 00:07:06.493 real 0m3.953s 00:07:06.493 user 0m3.864s 00:07:06.493 sys 0m0.489s 00:07:06.493 10:35:36 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.493 10:35:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:06.493 ************************************ 00:07:06.493 END TEST json_config_extra_key 00:07:06.493 ************************************ 00:07:06.493 10:35:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:06.493 10:35:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:06.493 10:35:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.493 10:35:37 -- common/autotest_common.sh@10 -- # set +x 00:07:06.493 ************************************ 00:07:06.493 START TEST alias_rpc 00:07:06.493 ************************************ 00:07:06.493 10:35:37 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:06.752 * Looking for test storage... 
00:07:06.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:06.752 10:35:37 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:06.752 10:35:37 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:06.752 10:35:37 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:06.752 10:35:37 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:06.752 10:35:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.753 10:35:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:06.753 10:35:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:06.753 10:35:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.753 10:35:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:06.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.753 10:35:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.753 10:35:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.753 10:35:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.753 10:35:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.753 --rc genhtml_branch_coverage=1 00:07:06.753 --rc genhtml_function_coverage=1 00:07:06.753 --rc genhtml_legend=1 00:07:06.753 --rc geninfo_all_blocks=1 00:07:06.753 --rc geninfo_unexecuted_blocks=1 00:07:06.753 00:07:06.753 ' 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.753 --rc genhtml_branch_coverage=1 00:07:06.753 --rc genhtml_function_coverage=1 00:07:06.753 --rc genhtml_legend=1 00:07:06.753 --rc geninfo_all_blocks=1 00:07:06.753 --rc 
geninfo_unexecuted_blocks=1 00:07:06.753 00:07:06.753 ' 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.753 --rc genhtml_branch_coverage=1 00:07:06.753 --rc genhtml_function_coverage=1 00:07:06.753 --rc genhtml_legend=1 00:07:06.753 --rc geninfo_all_blocks=1 00:07:06.753 --rc geninfo_unexecuted_blocks=1 00:07:06.753 00:07:06.753 ' 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:06.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.753 --rc genhtml_branch_coverage=1 00:07:06.753 --rc genhtml_function_coverage=1 00:07:06.753 --rc genhtml_legend=1 00:07:06.753 --rc geninfo_all_blocks=1 00:07:06.753 --rc geninfo_unexecuted_blocks=1 00:07:06.753 00:07:06.753 ' 00:07:06.753 10:35:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:06.753 10:35:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58000 00:07:06.753 10:35:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58000 00:07:06.753 10:35:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58000 ']' 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:06.753 10:35:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.753 [2024-11-15 10:35:37.306065] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:07:06.753 [2024-11-15 10:35:37.306490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58000 ] 00:07:07.011 [2024-11-15 10:35:37.493650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.269 [2024-11-15 10:35:37.619235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.204 10:35:38 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.204 10:35:38 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:08.204 10:35:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:08.462 10:35:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58000 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58000 ']' 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58000 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58000 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58000' 00:07:08.462 killing process with pid 58000 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@971 -- # kill 58000 00:07:08.462 10:35:38 alias_rpc -- common/autotest_common.sh@976 -- # wait 58000 00:07:10.995 00:07:10.995 real 0m3.972s 00:07:10.995 user 0m4.246s 00:07:10.995 sys 0m0.489s 00:07:10.995 ************************************ 00:07:10.995 END TEST 
alias_rpc 00:07:10.995 ************************************ 00:07:10.995 10:35:40 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:10.995 10:35:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.995 10:35:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:10.995 10:35:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:10.995 10:35:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:10.995 10:35:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.995 10:35:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.995 ************************************ 00:07:10.996 START TEST spdkcli_tcp 00:07:10.996 ************************************ 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:10.996 * Looking for test storage... 00:07:10.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.996 
10:35:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.996 10:35:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.996 --rc genhtml_branch_coverage=1 00:07:10.996 --rc genhtml_function_coverage=1 00:07:10.996 --rc genhtml_legend=1 
00:07:10.996 --rc geninfo_all_blocks=1 00:07:10.996 --rc geninfo_unexecuted_blocks=1 00:07:10.996 00:07:10.996 ' 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.996 --rc genhtml_branch_coverage=1 00:07:10.996 --rc genhtml_function_coverage=1 00:07:10.996 --rc genhtml_legend=1 00:07:10.996 --rc geninfo_all_blocks=1 00:07:10.996 --rc geninfo_unexecuted_blocks=1 00:07:10.996 00:07:10.996 ' 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.996 --rc genhtml_branch_coverage=1 00:07:10.996 --rc genhtml_function_coverage=1 00:07:10.996 --rc genhtml_legend=1 00:07:10.996 --rc geninfo_all_blocks=1 00:07:10.996 --rc geninfo_unexecuted_blocks=1 00:07:10.996 00:07:10.996 ' 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.996 --rc genhtml_branch_coverage=1 00:07:10.996 --rc genhtml_function_coverage=1 00:07:10.996 --rc genhtml_legend=1 00:07:10.996 --rc geninfo_all_blocks=1 00:07:10.996 --rc geninfo_unexecuted_blocks=1 00:07:10.996 00:07:10.996 ' 00:07:10.996 10:35:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:10.996 10:35:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:10.996 10:35:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:10.996 10:35:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:10.996 10:35:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:10.996 10:35:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:10.996 10:35:41 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.996 10:35:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58106 00:07:10.996 10:35:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58106 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58106 ']' 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.996 10:35:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.996 10:35:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.996 [2024-11-15 10:35:41.334828] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:07:10.996 [2024-11-15 10:35:41.335478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58106 ] 00:07:10.996 [2024-11-15 10:35:41.517010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.255 [2024-11-15 10:35:41.621617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.255 [2024-11-15 10:35:41.621619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.193 10:35:42 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.193 10:35:42 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:07:12.193 10:35:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58124 00:07:12.193 10:35:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:12.193 10:35:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:12.193 [ 00:07:12.193 "bdev_malloc_delete", 00:07:12.193 "bdev_malloc_create", 00:07:12.193 "bdev_null_resize", 00:07:12.193 "bdev_null_delete", 00:07:12.193 "bdev_null_create", 00:07:12.193 "bdev_nvme_cuse_unregister", 00:07:12.193 "bdev_nvme_cuse_register", 00:07:12.193 "bdev_opal_new_user", 00:07:12.193 "bdev_opal_set_lock_state", 00:07:12.193 "bdev_opal_delete", 00:07:12.193 "bdev_opal_get_info", 00:07:12.193 "bdev_opal_create", 00:07:12.193 "bdev_nvme_opal_revert", 00:07:12.193 "bdev_nvme_opal_init", 00:07:12.193 "bdev_nvme_send_cmd", 00:07:12.193 "bdev_nvme_set_keys", 00:07:12.193 "bdev_nvme_get_path_iostat", 00:07:12.193 "bdev_nvme_get_mdns_discovery_info", 00:07:12.193 "bdev_nvme_stop_mdns_discovery", 00:07:12.193 "bdev_nvme_start_mdns_discovery", 00:07:12.193 "bdev_nvme_set_multipath_policy", 00:07:12.193 
"bdev_nvme_set_preferred_path", 00:07:12.193 "bdev_nvme_get_io_paths", 00:07:12.193 "bdev_nvme_remove_error_injection", 00:07:12.193 "bdev_nvme_add_error_injection", 00:07:12.193 "bdev_nvme_get_discovery_info", 00:07:12.193 "bdev_nvme_stop_discovery", 00:07:12.193 "bdev_nvme_start_discovery", 00:07:12.193 "bdev_nvme_get_controller_health_info", 00:07:12.193 "bdev_nvme_disable_controller", 00:07:12.193 "bdev_nvme_enable_controller", 00:07:12.193 "bdev_nvme_reset_controller", 00:07:12.193 "bdev_nvme_get_transport_statistics", 00:07:12.193 "bdev_nvme_apply_firmware", 00:07:12.193 "bdev_nvme_detach_controller", 00:07:12.193 "bdev_nvme_get_controllers", 00:07:12.193 "bdev_nvme_attach_controller", 00:07:12.193 "bdev_nvme_set_hotplug", 00:07:12.193 "bdev_nvme_set_options", 00:07:12.193 "bdev_passthru_delete", 00:07:12.193 "bdev_passthru_create", 00:07:12.193 "bdev_lvol_set_parent_bdev", 00:07:12.193 "bdev_lvol_set_parent", 00:07:12.193 "bdev_lvol_check_shallow_copy", 00:07:12.193 "bdev_lvol_start_shallow_copy", 00:07:12.193 "bdev_lvol_grow_lvstore", 00:07:12.193 "bdev_lvol_get_lvols", 00:07:12.193 "bdev_lvol_get_lvstores", 00:07:12.193 "bdev_lvol_delete", 00:07:12.193 "bdev_lvol_set_read_only", 00:07:12.193 "bdev_lvol_resize", 00:07:12.193 "bdev_lvol_decouple_parent", 00:07:12.193 "bdev_lvol_inflate", 00:07:12.193 "bdev_lvol_rename", 00:07:12.193 "bdev_lvol_clone_bdev", 00:07:12.193 "bdev_lvol_clone", 00:07:12.193 "bdev_lvol_snapshot", 00:07:12.193 "bdev_lvol_create", 00:07:12.193 "bdev_lvol_delete_lvstore", 00:07:12.193 "bdev_lvol_rename_lvstore", 00:07:12.193 "bdev_lvol_create_lvstore", 00:07:12.193 "bdev_raid_set_options", 00:07:12.193 "bdev_raid_remove_base_bdev", 00:07:12.194 "bdev_raid_add_base_bdev", 00:07:12.194 "bdev_raid_delete", 00:07:12.194 "bdev_raid_create", 00:07:12.194 "bdev_raid_get_bdevs", 00:07:12.194 "bdev_error_inject_error", 00:07:12.194 "bdev_error_delete", 00:07:12.194 "bdev_error_create", 00:07:12.194 "bdev_split_delete", 00:07:12.194 
"bdev_split_create", 00:07:12.194 "bdev_delay_delete", 00:07:12.194 "bdev_delay_create", 00:07:12.194 "bdev_delay_update_latency", 00:07:12.194 "bdev_zone_block_delete", 00:07:12.194 "bdev_zone_block_create", 00:07:12.194 "blobfs_create", 00:07:12.194 "blobfs_detect", 00:07:12.194 "blobfs_set_cache_size", 00:07:12.194 "bdev_aio_delete", 00:07:12.194 "bdev_aio_rescan", 00:07:12.194 "bdev_aio_create", 00:07:12.194 "bdev_ftl_set_property", 00:07:12.194 "bdev_ftl_get_properties", 00:07:12.194 "bdev_ftl_get_stats", 00:07:12.194 "bdev_ftl_unmap", 00:07:12.194 "bdev_ftl_unload", 00:07:12.194 "bdev_ftl_delete", 00:07:12.194 "bdev_ftl_load", 00:07:12.194 "bdev_ftl_create", 00:07:12.194 "bdev_virtio_attach_controller", 00:07:12.194 "bdev_virtio_scsi_get_devices", 00:07:12.194 "bdev_virtio_detach_controller", 00:07:12.194 "bdev_virtio_blk_set_hotplug", 00:07:12.194 "bdev_iscsi_delete", 00:07:12.194 "bdev_iscsi_create", 00:07:12.194 "bdev_iscsi_set_options", 00:07:12.194 "accel_error_inject_error", 00:07:12.194 "ioat_scan_accel_module", 00:07:12.194 "dsa_scan_accel_module", 00:07:12.194 "iaa_scan_accel_module", 00:07:12.194 "keyring_file_remove_key", 00:07:12.194 "keyring_file_add_key", 00:07:12.194 "keyring_linux_set_options", 00:07:12.194 "fsdev_aio_delete", 00:07:12.194 "fsdev_aio_create", 00:07:12.194 "iscsi_get_histogram", 00:07:12.194 "iscsi_enable_histogram", 00:07:12.194 "iscsi_set_options", 00:07:12.194 "iscsi_get_auth_groups", 00:07:12.194 "iscsi_auth_group_remove_secret", 00:07:12.194 "iscsi_auth_group_add_secret", 00:07:12.194 "iscsi_delete_auth_group", 00:07:12.194 "iscsi_create_auth_group", 00:07:12.194 "iscsi_set_discovery_auth", 00:07:12.194 "iscsi_get_options", 00:07:12.194 "iscsi_target_node_request_logout", 00:07:12.194 "iscsi_target_node_set_redirect", 00:07:12.194 "iscsi_target_node_set_auth", 00:07:12.194 "iscsi_target_node_add_lun", 00:07:12.194 "iscsi_get_stats", 00:07:12.194 "iscsi_get_connections", 00:07:12.194 "iscsi_portal_group_set_auth", 
00:07:12.194 "iscsi_start_portal_group", 00:07:12.194 "iscsi_delete_portal_group", 00:07:12.194 "iscsi_create_portal_group", 00:07:12.194 "iscsi_get_portal_groups", 00:07:12.194 "iscsi_delete_target_node", 00:07:12.194 "iscsi_target_node_remove_pg_ig_maps", 00:07:12.194 "iscsi_target_node_add_pg_ig_maps", 00:07:12.194 "iscsi_create_target_node", 00:07:12.194 "iscsi_get_target_nodes", 00:07:12.194 "iscsi_delete_initiator_group", 00:07:12.194 "iscsi_initiator_group_remove_initiators", 00:07:12.194 "iscsi_initiator_group_add_initiators", 00:07:12.194 "iscsi_create_initiator_group", 00:07:12.194 "iscsi_get_initiator_groups", 00:07:12.194 "nvmf_set_crdt", 00:07:12.194 "nvmf_set_config", 00:07:12.194 "nvmf_set_max_subsystems", 00:07:12.194 "nvmf_stop_mdns_prr", 00:07:12.194 "nvmf_publish_mdns_prr", 00:07:12.194 "nvmf_subsystem_get_listeners", 00:07:12.194 "nvmf_subsystem_get_qpairs", 00:07:12.194 "nvmf_subsystem_get_controllers", 00:07:12.194 "nvmf_get_stats", 00:07:12.194 "nvmf_get_transports", 00:07:12.194 "nvmf_create_transport", 00:07:12.194 "nvmf_get_targets", 00:07:12.194 "nvmf_delete_target", 00:07:12.194 "nvmf_create_target", 00:07:12.194 "nvmf_subsystem_allow_any_host", 00:07:12.194 "nvmf_subsystem_set_keys", 00:07:12.194 "nvmf_subsystem_remove_host", 00:07:12.194 "nvmf_subsystem_add_host", 00:07:12.194 "nvmf_ns_remove_host", 00:07:12.194 "nvmf_ns_add_host", 00:07:12.194 "nvmf_subsystem_remove_ns", 00:07:12.194 "nvmf_subsystem_set_ns_ana_group", 00:07:12.194 "nvmf_subsystem_add_ns", 00:07:12.194 "nvmf_subsystem_listener_set_ana_state", 00:07:12.194 "nvmf_discovery_get_referrals", 00:07:12.194 "nvmf_discovery_remove_referral", 00:07:12.194 "nvmf_discovery_add_referral", 00:07:12.194 "nvmf_subsystem_remove_listener", 00:07:12.194 "nvmf_subsystem_add_listener", 00:07:12.194 "nvmf_delete_subsystem", 00:07:12.194 "nvmf_create_subsystem", 00:07:12.194 "nvmf_get_subsystems", 00:07:12.194 "env_dpdk_get_mem_stats", 00:07:12.194 "nbd_get_disks", 00:07:12.194 
"nbd_stop_disk", 00:07:12.194 "nbd_start_disk", 00:07:12.194 "ublk_recover_disk", 00:07:12.194 "ublk_get_disks", 00:07:12.194 "ublk_stop_disk", 00:07:12.194 "ublk_start_disk", 00:07:12.194 "ublk_destroy_target", 00:07:12.194 "ublk_create_target", 00:07:12.194 "virtio_blk_create_transport", 00:07:12.194 "virtio_blk_get_transports", 00:07:12.194 "vhost_controller_set_coalescing", 00:07:12.194 "vhost_get_controllers", 00:07:12.194 "vhost_delete_controller", 00:07:12.194 "vhost_create_blk_controller", 00:07:12.194 "vhost_scsi_controller_remove_target", 00:07:12.194 "vhost_scsi_controller_add_target", 00:07:12.194 "vhost_start_scsi_controller", 00:07:12.194 "vhost_create_scsi_controller", 00:07:12.194 "thread_set_cpumask", 00:07:12.194 "scheduler_set_options", 00:07:12.194 "framework_get_governor", 00:07:12.194 "framework_get_scheduler", 00:07:12.194 "framework_set_scheduler", 00:07:12.194 "framework_get_reactors", 00:07:12.194 "thread_get_io_channels", 00:07:12.194 "thread_get_pollers", 00:07:12.194 "thread_get_stats", 00:07:12.194 "framework_monitor_context_switch", 00:07:12.194 "spdk_kill_instance", 00:07:12.194 "log_enable_timestamps", 00:07:12.194 "log_get_flags", 00:07:12.194 "log_clear_flag", 00:07:12.194 "log_set_flag", 00:07:12.194 "log_get_level", 00:07:12.194 "log_set_level", 00:07:12.194 "log_get_print_level", 00:07:12.194 "log_set_print_level", 00:07:12.194 "framework_enable_cpumask_locks", 00:07:12.194 "framework_disable_cpumask_locks", 00:07:12.194 "framework_wait_init", 00:07:12.194 "framework_start_init", 00:07:12.194 "scsi_get_devices", 00:07:12.194 "bdev_get_histogram", 00:07:12.194 "bdev_enable_histogram", 00:07:12.194 "bdev_set_qos_limit", 00:07:12.194 "bdev_set_qd_sampling_period", 00:07:12.194 "bdev_get_bdevs", 00:07:12.194 "bdev_reset_iostat", 00:07:12.194 "bdev_get_iostat", 00:07:12.194 "bdev_examine", 00:07:12.194 "bdev_wait_for_examine", 00:07:12.195 "bdev_set_options", 00:07:12.195 "accel_get_stats", 00:07:12.195 "accel_set_options", 
00:07:12.195 "accel_set_driver", 00:07:12.195 "accel_crypto_key_destroy", 00:07:12.195 "accel_crypto_keys_get", 00:07:12.195 "accel_crypto_key_create", 00:07:12.195 "accel_assign_opc", 00:07:12.195 "accel_get_module_info", 00:07:12.195 "accel_get_opc_assignments", 00:07:12.195 "vmd_rescan", 00:07:12.195 "vmd_remove_device", 00:07:12.195 "vmd_enable", 00:07:12.195 "sock_get_default_impl", 00:07:12.195 "sock_set_default_impl", 00:07:12.195 "sock_impl_set_options", 00:07:12.195 "sock_impl_get_options", 00:07:12.195 "iobuf_get_stats", 00:07:12.195 "iobuf_set_options", 00:07:12.195 "keyring_get_keys", 00:07:12.195 "framework_get_pci_devices", 00:07:12.195 "framework_get_config", 00:07:12.195 "framework_get_subsystems", 00:07:12.195 "fsdev_set_opts", 00:07:12.195 "fsdev_get_opts", 00:07:12.195 "trace_get_info", 00:07:12.195 "trace_get_tpoint_group_mask", 00:07:12.195 "trace_disable_tpoint_group", 00:07:12.195 "trace_enable_tpoint_group", 00:07:12.195 "trace_clear_tpoint_mask", 00:07:12.195 "trace_set_tpoint_mask", 00:07:12.195 "notify_get_notifications", 00:07:12.195 "notify_get_types", 00:07:12.195 "spdk_get_version", 00:07:12.195 "rpc_get_methods" 00:07:12.195 ] 00:07:12.195 10:35:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:12.195 10:35:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:12.195 10:35:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.195 10:35:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:12.195 10:35:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58106 00:07:12.195 10:35:42 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58106 ']' 00:07:12.195 10:35:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58106 00:07:12.195 10:35:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:07:12.195 10:35:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:12.195 10:35:42 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58106 00:07:12.455 killing process with pid 58106 00:07:12.455 10:35:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:12.455 10:35:42 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:12.455 10:35:42 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58106' 00:07:12.455 10:35:42 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58106 00:07:12.455 10:35:42 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58106 00:07:14.356 ************************************ 00:07:14.356 END TEST spdkcli_tcp 00:07:14.356 ************************************ 00:07:14.356 00:07:14.357 real 0m3.818s 00:07:14.357 user 0m7.055s 00:07:14.357 sys 0m0.523s 00:07:14.357 10:35:44 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:14.357 10:35:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.357 10:35:44 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:14.357 10:35:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:14.357 10:35:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:14.357 10:35:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.357 ************************************ 00:07:14.357 START TEST dpdk_mem_utility 00:07:14.357 ************************************ 00:07:14.357 10:35:44 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:14.615 * Looking for test storage... 
00:07:14.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:14.615 10:35:44 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:14.615 10:35:44 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:14.615 10:35:44 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.615 10:35:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:14.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.615 --rc genhtml_branch_coverage=1 00:07:14.615 --rc genhtml_function_coverage=1 00:07:14.615 --rc genhtml_legend=1 00:07:14.615 --rc geninfo_all_blocks=1 00:07:14.615 --rc geninfo_unexecuted_blocks=1 00:07:14.615 00:07:14.615 ' 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:14.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.615 --rc genhtml_branch_coverage=1 00:07:14.615 --rc genhtml_function_coverage=1 00:07:14.615 --rc genhtml_legend=1 00:07:14.615 --rc geninfo_all_blocks=1 00:07:14.615 --rc 
geninfo_unexecuted_blocks=1 00:07:14.615 00:07:14.615 ' 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:14.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.615 --rc genhtml_branch_coverage=1 00:07:14.615 --rc genhtml_function_coverage=1 00:07:14.615 --rc genhtml_legend=1 00:07:14.615 --rc geninfo_all_blocks=1 00:07:14.615 --rc geninfo_unexecuted_blocks=1 00:07:14.615 00:07:14.615 ' 00:07:14.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:14.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.615 --rc genhtml_branch_coverage=1 00:07:14.615 --rc genhtml_function_coverage=1 00:07:14.615 --rc genhtml_legend=1 00:07:14.615 --rc geninfo_all_blocks=1 00:07:14.615 --rc geninfo_unexecuted_blocks=1 00:07:14.615 00:07:14.615 ' 00:07:14.615 10:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:14.615 10:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58218 00:07:14.615 10:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:14.615 10:35:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58218 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58218 ']' 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.615 10:35:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:14.872 [2024-11-15 10:35:45.190586] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:07:14.872 [2024-11-15 10:35:45.190919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58218 ] 00:07:14.872 [2024-11-15 10:35:45.360483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.129 [2024-11-15 10:35:45.465252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.695 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.695 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:07:15.695 10:35:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:15.695 10:35:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:15.695 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.695 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:15.695 { 00:07:15.695 "filename": "/tmp/spdk_mem_dump.txt" 00:07:15.695 } 00:07:15.695 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.695 10:35:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:15.955 DPDK memory size 824.000000 MiB in 1 heap(s) 00:07:15.955 1 heaps totaling size 824.000000 MiB 00:07:15.955 size: 824.000000 MiB heap id: 0 00:07:15.955 end heaps---------- 00:07:15.955 9 mempools totaling size 603.782043 MiB 00:07:15.955 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:15.955 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:15.955 size: 100.555481 MiB name: bdev_io_58218 00:07:15.955 size: 50.003479 MiB name: msgpool_58218 00:07:15.955 size: 36.509338 MiB name: fsdev_io_58218 00:07:15.955 size: 21.763794 MiB name: PDU_Pool 00:07:15.955 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:15.955 size: 4.133484 MiB name: evtpool_58218 00:07:15.955 size: 0.026123 MiB name: Session_Pool 00:07:15.955 end mempools------- 00:07:15.955 6 memzones totaling size 4.142822 MiB 00:07:15.955 size: 1.000366 MiB name: RG_ring_0_58218 00:07:15.955 size: 1.000366 MiB name: RG_ring_1_58218 00:07:15.955 size: 1.000366 MiB name: RG_ring_4_58218 00:07:15.955 size: 1.000366 MiB name: RG_ring_5_58218 00:07:15.955 size: 0.125366 MiB name: RG_ring_2_58218 00:07:15.955 size: 0.015991 MiB name: RG_ring_3_58218 00:07:15.955 end memzones------- 00:07:15.955 10:35:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:15.955 heap id: 0 total size: 824.000000 MiB number of busy elements: 307 number of free elements: 18 00:07:15.955 list of free elements. 
size: 16.783325 MiB 00:07:15.955 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:15.955 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:15.955 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:15.955 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:15.955 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:15.955 element at address: 0x200019a00000 with size: 0.999084 MiB 00:07:15.955 element at address: 0x200032600000 with size: 0.994324 MiB 00:07:15.955 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:15.955 element at address: 0x200019200000 with size: 0.959656 MiB 00:07:15.955 element at address: 0x200019d00040 with size: 0.936401 MiB 00:07:15.955 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:15.955 element at address: 0x20001b400000 with size: 0.564148 MiB 00:07:15.955 element at address: 0x200000c00000 with size: 0.489197 MiB 00:07:15.955 element at address: 0x200019600000 with size: 0.488708 MiB 00:07:15.955 element at address: 0x200019e00000 with size: 0.485413 MiB 00:07:15.955 element at address: 0x200012c00000 with size: 0.433228 MiB 00:07:15.955 element at address: 0x200028800000 with size: 0.390442 MiB 00:07:15.956 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:15.956 list of standard malloc elements. 
size: 199.285767 MiB 00:07:15.956 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:15.956 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:15.956 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:15.956 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:15.956 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:07:15.956 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:15.956 element at address: 0x200019deff40 with size: 0.062683 MiB 00:07:15.956 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:15.956 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:15.956 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:07:15.956 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:15.956 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:15.956 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:15.956 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:07:15.956 [several hundred further malloc elements elided — all are 0.000244 MiB entries at consecutive addresses in the 0x2000004f..., 0x2000008..., 0x200000c..., 0x20000a5..., 0x200012..., 0x2000196..., 0x20001b49..., and 0x2000288... ranges] 00:07:15.958 element at address:
0x20002886fc80 with size: 0.000244 MiB 00:07:15.958 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:07:15.958 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:07:15.958 list of memzone associated elements. size: 607.930908 MiB 00:07:15.958 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:07:15.958 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:15.958 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:07:15.958 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:15.958 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:07:15.958 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58218_0 00:07:15.958 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:15.958 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58218_0 00:07:15.958 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:15.958 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58218_0 00:07:15.958 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:07:15.958 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:15.958 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:07:15.958 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:15.958 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:15.958 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58218_0 00:07:15.958 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:15.958 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58218 00:07:15.958 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:15.958 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58218 00:07:15.958 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:07:15.958 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:15.958 element at 
address: 0x200019ebc780 with size: 1.008179 MiB 00:07:15.958 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:15.958 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:15.958 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:15.958 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:07:15.958 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:15.958 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:15.958 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58218 00:07:15.958 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:15.958 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58218 00:07:15.958 element at address: 0x200019affd40 with size: 1.000549 MiB 00:07:15.958 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58218 00:07:15.958 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:07:15.958 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58218 00:07:15.958 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:15.958 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58218 00:07:15.958 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:15.958 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58218 00:07:15.958 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:07:15.958 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:15.958 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:07:15.958 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:15.958 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:07:15.958 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:15.958 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:15.958 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58218 
00:07:15.958 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:15.958 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58218 00:07:15.958 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:07:15.958 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:15.958 element at address: 0x200028864140 with size: 0.023804 MiB 00:07:15.958 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:15.958 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:15.958 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58218 00:07:15.958 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:07:15.958 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:15.958 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:15.958 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58218 00:07:15.958 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:15.958 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58218 00:07:15.958 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:15.958 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58218 00:07:15.958 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:07:15.958 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:15.958 10:35:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:15.958 10:35:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58218 00:07:15.958 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58218 ']' 00:07:15.958 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58218 00:07:15.958 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:07:15.958 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:15.958 10:35:46 
dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58218 00:07:15.958 killing process with pid 58218 00:07:15.958 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:15.958 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:15.958 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58218' 00:07:15.958 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58218 00:07:15.958 10:35:46 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58218 00:07:18.494 00:07:18.494 real 0m3.637s 00:07:18.494 user 0m3.842s 00:07:18.494 sys 0m0.471s 00:07:18.494 ************************************ 00:07:18.494 END TEST dpdk_mem_utility 00:07:18.494 ************************************ 00:07:18.494 10:35:48 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.494 10:35:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:18.494 10:35:48 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:18.494 10:35:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.494 10:35:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.494 10:35:48 -- common/autotest_common.sh@10 -- # set +x 00:07:18.494 ************************************ 00:07:18.494 START TEST event 00:07:18.494 ************************************ 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:18.494 * Looking for test storage... 
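The teardown trace above follows a common bash pattern: probe liveness with `kill -0` (signal 0 checks existence without signalling), confirm the process name with `ps`, then `kill` and `wait` to reap it. A minimal sketch of that pattern — a hypothetical helper, not SPDK's actual `killprocess` from autotest_common.sh, which also handles sudo and root-owned processes:

```shell
#!/usr/bin/env bash
# Sketch of the kill-and-reap teardown pattern seen in the trace above.
# Hypothetical simplification; not SPDK's real killprocess implementation.
killprocess() {
    local pid=$1
    # kill -0 delivers no signal; it only reports whether the pid exists
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" 2>/dev/null || true
        # reap the child so it does not linger as a zombie
        wait "$pid" 2>/dev/null || true
    fi
    echo "killing process with pid $pid"
}

sleep 60 &            # stand-in for the app under test
app_pid=$!
killprocess "$app_pid"
```

`wait` only succeeds for children of the current shell, which is why the trace runs the whole sequence in the same script that launched the app.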
00:07:18.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.494 10:35:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.494 10:35:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.494 10:35:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.494 10:35:48 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.494 10:35:48 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.494 10:35:48 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.494 10:35:48 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.494 10:35:48 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.494 10:35:48 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.494 10:35:48 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.494 10:35:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.494 10:35:48 event -- scripts/common.sh@344 -- # case "$op" in 00:07:18.494 10:35:48 event -- scripts/common.sh@345 -- # : 1 00:07:18.494 10:35:48 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.494 10:35:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.494 10:35:48 event -- scripts/common.sh@365 -- # decimal 1 00:07:18.494 10:35:48 event -- scripts/common.sh@353 -- # local d=1 00:07:18.494 10:35:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.494 10:35:48 event -- scripts/common.sh@355 -- # echo 1 00:07:18.494 10:35:48 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.494 10:35:48 event -- scripts/common.sh@366 -- # decimal 2 00:07:18.494 10:35:48 event -- scripts/common.sh@353 -- # local d=2 00:07:18.494 10:35:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.494 10:35:48 event -- scripts/common.sh@355 -- # echo 2 00:07:18.494 10:35:48 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.494 10:35:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.494 10:35:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.494 10:35:48 event -- scripts/common.sh@368 -- # return 0 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.494 --rc genhtml_branch_coverage=1 00:07:18.494 --rc genhtml_function_coverage=1 00:07:18.494 --rc genhtml_legend=1 00:07:18.494 --rc geninfo_all_blocks=1 00:07:18.494 --rc geninfo_unexecuted_blocks=1 00:07:18.494 00:07:18.494 ' 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.494 --rc genhtml_branch_coverage=1 00:07:18.494 --rc genhtml_function_coverage=1 00:07:18.494 --rc genhtml_legend=1 00:07:18.494 --rc geninfo_all_blocks=1 00:07:18.494 --rc geninfo_unexecuted_blocks=1 00:07:18.494 00:07:18.494 ' 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.494 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:18.494 --rc genhtml_branch_coverage=1 00:07:18.494 --rc genhtml_function_coverage=1 00:07:18.494 --rc genhtml_legend=1 00:07:18.494 --rc geninfo_all_blocks=1 00:07:18.494 --rc geninfo_unexecuted_blocks=1 00:07:18.494 00:07:18.494 ' 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.494 --rc genhtml_branch_coverage=1 00:07:18.494 --rc genhtml_function_coverage=1 00:07:18.494 --rc genhtml_legend=1 00:07:18.494 --rc geninfo_all_blocks=1 00:07:18.494 --rc geninfo_unexecuted_blocks=1 00:07:18.494 00:07:18.494 ' 00:07:18.494 10:35:48 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:18.494 10:35:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:18.494 10:35:48 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:18.494 10:35:48 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:18.495 10:35:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.495 10:35:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.495 ************************************ 00:07:18.495 START TEST event_perf 00:07:18.495 ************************************ 00:07:18.495 10:35:48 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:18.495 Running I/O for 1 seconds...[2024-11-15 10:35:48.779241] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
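The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.` and compares it component-wise, padding the shorter one with zeros. A minimal sketch of that comparison, modeled loosely on scripts/common.sh but simplified (the real helper also handles `-` separators and other operators):

```shell
#!/usr/bin/env bash
# Component-wise "less than" version compare, as traced above.
# Simplified sketch; not the exact scripts/common.sh implementation.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)     # IFS=. splits "1.15" into (1 15)
    local i a b len
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        a=${v1[i]:-0}; b=${v2[i]:-0}   # missing components count as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1                      # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 is older than 2"
```

Note that this is numeric per component, so `1.2.3 < 1.10` holds — the string comparison `"1.2.3" < "1.10"` would get that wrong.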
00:07:18.495 [2024-11-15 10:35:48.779529] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58326 ] 00:07:18.495 [2024-11-15 10:35:49.012130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.759 [2024-11-15 10:35:49.154361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.759 [2024-11-15 10:35:49.154475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.759 [2024-11-15 10:35:49.154543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.759 Running I/O for 1 seconds...[2024-11-15 10:35:49.154543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.153 00:07:20.153 lcore 0: 179697 00:07:20.153 lcore 1: 179695 00:07:20.153 lcore 2: 179696 00:07:20.153 lcore 3: 179696 00:07:20.153 done. 
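event_perf was launched with `-m 0xF`, and the entries above show DPDK accordingly reporting four cores and starting reactors on cores 0 through 3. The mask-to-core mapping can be sketched as follows; this is an illustrative expansion of the bitmask convention, not SPDK's actual mask parser.

```shell
# Sketch: expand a hex core mask like the -m 0xF above into a core list.
# Bit N set in the mask selects logical core N.
mask=0xF
cores=()
for (( bit = 0; bit < 64; bit++ )); do
  if (( (mask >> bit) & 1 )); then
    cores+=("$bit")
  fi
done
echo "cores: ${cores[*]}"   # 0xF -> cores: 0 1 2 3
```

With `mask=0x3` (as used by the app_repeat test later in this log) the same loop would select cores 0 and 1.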
00:07:20.153 00:07:20.153 real 0m1.657s 00:07:20.153 user 0m4.420s 00:07:20.153 ************************************ 00:07:20.153 END TEST event_perf 00:07:20.153 ************************************ 00:07:20.153 sys 0m0.105s 00:07:20.153 10:35:50 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:20.153 10:35:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.153 10:35:50 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:20.153 10:35:50 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:20.153 10:35:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:20.153 10:35:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.153 ************************************ 00:07:20.153 START TEST event_reactor 00:07:20.153 ************************************ 00:07:20.153 10:35:50 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:20.153 [2024-11-15 10:35:50.476966] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:07:20.153 [2024-11-15 10:35:50.477098] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58366 ] 00:07:20.153 [2024-11-15 10:35:50.648511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.411 [2024-11-15 10:35:50.752071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.785 test_start 00:07:21.785 oneshot 00:07:21.785 tick 100 00:07:21.785 tick 100 00:07:21.785 tick 250 00:07:21.785 tick 100 00:07:21.785 tick 100 00:07:21.785 tick 100 00:07:21.785 tick 250 00:07:21.785 tick 500 00:07:21.785 tick 100 00:07:21.785 tick 100 00:07:21.785 tick 250 00:07:21.785 tick 100 00:07:21.785 tick 100 00:07:21.785 test_end 00:07:21.785 00:07:21.785 real 0m1.517s 00:07:21.785 user 0m1.333s 00:07:21.785 sys 0m0.074s 00:07:21.785 10:35:51 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.785 10:35:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:21.785 ************************************ 00:07:21.785 END TEST event_reactor 00:07:21.785 ************************************ 00:07:21.785 10:35:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:21.785 10:35:51 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:21.785 10:35:51 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.785 10:35:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.785 ************************************ 00:07:21.785 START TEST event_reactor_perf 00:07:21.785 ************************************ 00:07:21.785 10:35:51 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:21.785 [2024-11-15 
10:35:52.044514] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:07:21.785 [2024-11-15 10:35:52.044661] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58402 ] 00:07:21.785 [2024-11-15 10:35:52.267039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.043 [2024-11-15 10:35:52.369842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.419 test_start 00:07:23.419 test_end 00:07:23.419 Performance: 276255 events per second 00:07:23.419 00:07:23.419 real 0m1.589s 00:07:23.419 user 0m1.395s 00:07:23.419 sys 0m0.084s 00:07:23.419 10:35:53 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.419 10:35:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.419 ************************************ 00:07:23.419 END TEST event_reactor_perf 00:07:23.419 ************************************ 00:07:23.419 10:35:53 event -- event/event.sh@49 -- # uname -s 00:07:23.419 10:35:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:23.419 10:35:53 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:23.419 10:35:53 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:23.419 10:35:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.419 10:35:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.419 ************************************ 00:07:23.419 START TEST event_scheduler 00:07:23.419 ************************************ 00:07:23.419 10:35:53 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:23.419 * Looking for test storage... 
00:07:23.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:23.419 10:35:53 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:23.419 10:35:53 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:07:23.419 10:35:53 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:23.419 10:35:53 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.419 10:35:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:23.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.420 10:35:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.420 10:35:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.420 10:35:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.420 10:35:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:23.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.420 --rc genhtml_branch_coverage=1 00:07:23.420 --rc genhtml_function_coverage=1 00:07:23.420 --rc genhtml_legend=1 00:07:23.420 --rc geninfo_all_blocks=1 00:07:23.420 --rc geninfo_unexecuted_blocks=1 00:07:23.420 00:07:23.420 ' 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:23.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.420 
--rc genhtml_branch_coverage=1 00:07:23.420 --rc genhtml_function_coverage=1 00:07:23.420 --rc genhtml_legend=1 00:07:23.420 --rc geninfo_all_blocks=1 00:07:23.420 --rc geninfo_unexecuted_blocks=1 00:07:23.420 00:07:23.420 ' 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:23.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.420 --rc genhtml_branch_coverage=1 00:07:23.420 --rc genhtml_function_coverage=1 00:07:23.420 --rc genhtml_legend=1 00:07:23.420 --rc geninfo_all_blocks=1 00:07:23.420 --rc geninfo_unexecuted_blocks=1 00:07:23.420 00:07:23.420 ' 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:23.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.420 --rc genhtml_branch_coverage=1 00:07:23.420 --rc genhtml_function_coverage=1 00:07:23.420 --rc genhtml_legend=1 00:07:23.420 --rc geninfo_all_blocks=1 00:07:23.420 --rc geninfo_unexecuted_blocks=1 00:07:23.420 00:07:23.420 ' 00:07:23.420 10:35:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:23.420 10:35:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58478 00:07:23.420 10:35:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.420 10:35:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:23.420 10:35:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58478 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58478 ']' 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.420 10:35:53 event.event_scheduler -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.420 10:35:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:23.420 [2024-11-15 10:35:53.899024] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:07:23.420 [2024-11-15 10:35:53.899422] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58478 ] 00:07:23.677 [2024-11-15 10:35:54.077522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.677 [2024-11-15 10:35:54.191955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.677 [2024-11-15 10:35:54.192022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.677 [2024-11-15 10:35:54.192091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.677 [2024-11-15 10:35:54.192102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.611 10:35:54 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.611 10:35:54 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:07:24.611 10:35:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:24.611 10:35:54 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.612 10:35:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.612 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.612 POWER: Cannot set governor of lcore 0 to userspace 00:07:24.612 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.612 POWER: Cannot set governor of lcore 0 to performance 00:07:24.612 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.612 POWER: Cannot set governor of lcore 0 to userspace 00:07:24.612 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:24.612 POWER: Cannot set governor of lcore 0 to userspace 00:07:24.612 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:24.612 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:24.612 POWER: Unable to set Power Management Environment for lcore 0 00:07:24.612 [2024-11-15 10:35:54.912088] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:24.612 [2024-11-15 10:35:54.912210] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:24.612 [2024-11-15 10:35:54.912263] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:24.612 [2024-11-15 10:35:54.912431] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:24.612 [2024-11-15 10:35:54.912541] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:24.612 [2024-11-15 10:35:54.912644] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:24.612 10:35:54 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.612 10:35:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:24.612 10:35:54 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.612 10:35:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.870 [2024-11-15 10:35:55.205365] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:24.871 10:35:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:24.871 10:35:55 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:24.871 10:35:55 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 ************************************ 00:07:24.871 START TEST scheduler_create_thread 00:07:24.871 ************************************ 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 2 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 3 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 4 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 5 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 6 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.871 7 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 8 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 9 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 10 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.871 10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.248 10:35:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.248 10:35:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:26.248 10:35:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:26.248 10:35:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.248 10:35:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.638 ************************************ 00:07:27.638 END TEST scheduler_create_thread 00:07:27.638 ************************************ 00:07:27.638 10:35:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.638 00:07:27.638 real 0m2.622s 00:07:27.638 user 0m0.016s 00:07:27.638 sys 0m0.009s 00:07:27.638 10:35:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:27.638 10:35:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.638 10:35:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:27.638 10:35:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58478 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58478 ']' 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58478 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58478 00:07:27.638 killing process with pid 58478 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58478' 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58478 00:07:27.638 10:35:57 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58478 00:07:27.897 [2024-11-15 10:35:58.319374] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:29.274 ************************************ 00:07:29.274 END TEST event_scheduler 00:07:29.274 ************************************ 00:07:29.274 00:07:29.274 real 0m5.765s 00:07:29.274 user 0m10.393s 00:07:29.274 sys 0m0.457s 00:07:29.274 10:35:59 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:29.274 10:35:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.274 10:35:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:29.274 10:35:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:29.274 10:35:59 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:29.274 10:35:59 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.274 10:35:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:29.274 ************************************ 00:07:29.274 START TEST app_repeat 00:07:29.274 ************************************ 00:07:29.274 10:35:59 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:29.274 Process app_repeat pid: 58592 00:07:29.274 spdk_app_start Round 0 00:07:29.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58592 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58592' 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:29.274 10:35:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58592 /var/tmp/spdk-nbd.sock 00:07:29.274 10:35:59 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58592 ']' 00:07:29.274 10:35:59 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:29.274 10:35:59 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:29.274 10:35:59 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:29.274 10:35:59 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:29.274 10:35:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:29.274 [2024-11-15 10:35:59.511836] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:07:29.274 [2024-11-15 10:35:59.512010] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58592 ] 00:07:29.274 [2024-11-15 10:35:59.690719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.533 [2024-11-15 10:35:59.834334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.533 [2024-11-15 10:35:59.834337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.100 10:36:00 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:30.100 10:36:00 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:30.100 10:36:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.666 Malloc0 00:07:30.666 10:36:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.924 Malloc1 00:07:30.924 10:36:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.924 10:36:01 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:30.924 10:36:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:31.182 /dev/nbd0
00:07:31.182 10:36:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:31.182 10:36:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:31.182 1+0 records in
00:07:31.182 1+0 records out
00:07:31.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343588 s, 11.9 MB/s
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:07:31.182 10:36:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:07:31.182 10:36:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:31.182 10:36:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:31.182 10:36:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:31.747 /dev/nbd1
00:07:31.747 10:36:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:31.747 10:36:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:31.747 1+0 records in
00:07:31.747 1+0 records out
00:07:31.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308474 s, 13.3 MB/s
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:07:31.747 10:36:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:07:31.747 10:36:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:31.747 10:36:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:31.747 10:36:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:31.747 10:36:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:31.747 10:36:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:32.005 {
00:07:32.005 "nbd_device": "/dev/nbd0",
00:07:32.005 "bdev_name": "Malloc0"
00:07:32.005 },
00:07:32.005 {
00:07:32.005 "nbd_device": "/dev/nbd1",
00:07:32.005 "bdev_name": "Malloc1"
00:07:32.005 }
00:07:32.005 ]'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:32.005 {
00:07:32.005 "nbd_device": "/dev/nbd0",
00:07:32.005 "bdev_name": "Malloc0"
00:07:32.005 },
00:07:32.005 {
00:07:32.005 "nbd_device": "/dev/nbd1",
00:07:32.005 "bdev_name": "Malloc1"
00:07:32.005 }
00:07:32.005 ]'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:32.005 /dev/nbd1'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:32.005 /dev/nbd1'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:32.005 256+0 records in
00:07:32.005 256+0 records out
00:07:32.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662711 s, 158 MB/s
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:32.005 256+0 records in
00:07:32.005 256+0 records out
00:07:32.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303632 s, 34.5 MB/s
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:32.005 256+0 records in
00:07:32.005 256+0 records out
00:07:32.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0364494 s, 28.8 MB/s
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:32.005 10:36:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:32.263 10:36:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:32.263 10:36:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:32.263 10:36:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:32.263 10:36:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:32.263 10:36:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:32.263 10:36:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:32.263 10:36:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:32.263 10:36:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:32.263 10:36:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:32.264 10:36:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:32.829 10:36:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:33.087 10:36:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:33.087 10:36:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:33.651 10:36:03 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:34.639 [2024-11-15 10:36:04.970733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:34.639 [2024-11-15 10:36:05.070709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:34.639 [2024-11-15 10:36:05.070722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.896 [2024-11-15 10:36:05.238136] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:34.896 [2024-11-15 10:36:05.238500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:36.798 spdk_app_start Round 1
00:07:36.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:36.798 10:36:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:36.798 10:36:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:07:36.798 10:36:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58592 /var/tmp/spdk-nbd.sock
00:07:36.798 10:36:06 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58592 ']'
00:07:36.798 10:36:06 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:36.798 10:36:06 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:36.798 10:36:06 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:36.798 10:36:06 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:36.798 10:36:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:36.798 10:36:07 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:36.798 10:36:07 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:07:36.798 10:36:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:37.056 Malloc0
00:07:37.056 10:36:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:37.314 Malloc1
00:07:37.572 10:36:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:37.572 10:36:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:37.831 /dev/nbd0
00:07:37.831 10:36:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:37.831 10:36:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:37.831 1+0 records in
00:07:37.831 1+0 records out
00:07:37.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275679 s, 14.9 MB/s
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
10:36:08 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:07:37.831 10:36:08 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:07:37.831 10:36:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:37.831 10:36:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:37.831 10:36:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:38.088 /dev/nbd1
00:07:38.088 10:36:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:38.089 10:36:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:38.089 1+0 records in
00:07:38.089 1+0 records out
00:07:38.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484022 s, 8.5 MB/s
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:07:38.089 10:36:08 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:07:38.089 10:36:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:38.089 10:36:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:38.089 10:36:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:38.089 10:36:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:38.089 10:36:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:38.347 {
00:07:38.347 "nbd_device": "/dev/nbd0",
00:07:38.347 "bdev_name": "Malloc0"
00:07:38.347 },
00:07:38.347 {
00:07:38.347 "nbd_device": "/dev/nbd1",
00:07:38.347 "bdev_name": "Malloc1"
00:07:38.347 }
00:07:38.347 ]'
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:38.347 {
00:07:38.347 "nbd_device": "/dev/nbd0",
00:07:38.347 "bdev_name": "Malloc0"
00:07:38.347 },
00:07:38.347 {
00:07:38.347 "nbd_device": "/dev/nbd1",
00:07:38.347 "bdev_name": "Malloc1"
00:07:38.347 }
00:07:38.347 ]'
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:38.347 /dev/nbd1'
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:38.347 /dev/nbd1'
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
10:36:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:38.347 256+0 records in
00:07:38.347 256+0 records out
00:07:38.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00668711 s, 157 MB/s
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:38.347 256+0 records in
00:07:38.347 256+0 records out
00:07:38.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256708 s, 40.8 MB/s
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:38.347 10:36:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:38.606 256+0 records in
00:07:38.606 256+0 records out
00:07:38.606 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349435 s, 30.0 MB/s
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:38.606 10:36:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:38.864 10:36:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:39.122 10:36:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:39.122 10:36:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:39.122 10:36:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:39.123 10:36:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:39.123 10:36:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:39.123 10:36:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:39.123 10:36:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:39.123 10:36:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:39.123 10:36:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:39.123 10:36:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:39.123 10:36:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:39.382 10:36:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:39.382 10:36:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:39.947 10:36:10 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:40.887 [2024-11-15 10:36:11.342998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:40.887 [2024-11-15 10:36:11.444015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:40.887 [2024-11-15 10:36:11.444017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.145 [2024-11-15 10:36:11.617157] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:41.145 [2024-11-15 10:36:11.617276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:43.045 spdk_app_start Round 2
00:07:43.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:43.045 10:36:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:43.045 10:36:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:43.045 10:36:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58592 /var/tmp/spdk-nbd.sock
00:07:43.045 10:36:13 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58592 ']'
00:07:43.045 10:36:13 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:43.045 10:36:13 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:43.045 10:36:13 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:43.045 10:36:13 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:43.045 10:36:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:43.304 10:36:13 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:43.304 10:36:13 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:07:43.304 10:36:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:43.562 Malloc0
00:07:43.562 10:36:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:44.129 Malloc1
00:07:44.129 10:36:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:44.129 10:36:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:44.400 /dev/nbd0
00:07:44.400 10:36:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:44.400 10:36:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:44.400 1+0 records in
00:07:44.400 1+0 records out
00:07:44.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373098 s, 11.0 MB/s
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:07:44.400 10:36:14 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:07:44.400 10:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:44.400 10:36:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:44.400 10:36:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:44.660 /dev/nbd1
00:07:44.660 10:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:44.660 10:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:44.660 1+0 records in
00:07:44.660 1+0 records out
00:07:44.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307976 s, 13.3 MB/s
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:07:44.660 10:36:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:07:44.660 10:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:44.660 10:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:44.660 10:36:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:44.660 10:36:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:44.660 10:36:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:44.918 10:36:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:44.918 {
00:07:44.918 "nbd_device": "/dev/nbd0",
00:07:44.918 "bdev_name": "Malloc0"
00:07:44.918 },
00:07:44.918 {
00:07:44.918 "nbd_device": "/dev/nbd1",
00:07:44.918 "bdev_name": "Malloc1"
00:07:44.918 }
00:07:44.918 ]'
00:07:44.918 10:36:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:44.918 {
00:07:44.918 "nbd_device": "/dev/nbd0",
00:07:44.918 "bdev_name": "Malloc0"
00:07:44.918 },
00:07:44.918 {
00:07:44.918 "nbd_device": "/dev/nbd1",
00:07:44.918 "bdev_name": "Malloc1"
00:07:44.918 }
00:07:44.918 ]'
00:07:44.918 10:36:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:45.176 /dev/nbd1'
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:45.176 /dev/nbd1'
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:45.176 10:36:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:45.176 256+0 records in
00:07:45.176 256+0 records out
00:07:45.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00958983 s, 109 MB/s
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:45.177 256+0 records in
00:07:45.177 256+0 records out
00:07:45.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246638 s, 42.5 MB/s
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:45.177 256+0 records in
00:07:45.177 256+0 records out
00:07:45.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0335126 s, 31.3 MB/s
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:45.177 10:36:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:45.435 10:36:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:46.002 10:36:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:46.260 10:36:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:46.260 10:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:46.260 10:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:46.260 10:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:46.260 10:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:46.261 10:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:46.261 10:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:46.261 10:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:46.261 10:36:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:46.261 10:36:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:46.261 10:36:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:46.261 10:36:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:46.261 10:36:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:46.828 10:36:17 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:47.764 [2024-11-15 10:36:18.134037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:47.764 [2024-11-15 10:36:18.235645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:47.764 [2024-11-15 10:36:18.235658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.023 [2024-11-15 10:36:18.404435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:48.023 [2024-11-15 10:36:18.404521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:49.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:49.973 10:36:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58592 /var/tmp/spdk-nbd.sock
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58592 ']'
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:07:49.973 10:36:20 event.app_repeat -- event/event.sh@39 -- # killprocess 58592
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58592 ']'
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58592
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@957 -- # uname
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58592
killing process with pid 58592
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58592'
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58592
00:07:49.973 10:36:20 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58592
00:07:50.909 spdk_app_start is called in Round 0.
00:07:50.909 Shutdown signal received, stop current app iteration
00:07:50.909 Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 reinitialization...
00:07:50.909 spdk_app_start is called in Round 1.
00:07:50.909 Shutdown signal received, stop current app iteration
00:07:50.909 Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 reinitialization...
00:07:50.909 spdk_app_start is called in Round 2.
00:07:50.909 Shutdown signal received, stop current app iteration
00:07:50.909 Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 reinitialization...
00:07:50.909 spdk_app_start is called in Round 3.
00:07:50.909 Shutdown signal received, stop current app iteration
00:07:50.909 10:36:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:50.909 10:36:21 event.app_repeat -- event/event.sh@42 -- # return 0
00:07:50.909
00:07:50.909 real 0m21.891s
00:07:50.909 user 0m49.088s
00:07:50.909 sys 0m2.800s
00:07:50.909 10:36:21 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:50.909 10:36:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:50.909 ************************************
00:07:50.909 END TEST app_repeat
00:07:50.909 ************************************
00:07:50.909 10:36:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:50.909 10:36:21 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:50.909 10:36:21 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:50.909 10:36:21 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:50.909 10:36:21 event -- common/autotest_common.sh@10 -- # set +x
00:07:50.909 ************************************
00:07:50.909 START TEST cpu_locks
00:07:50.909 ************************************
00:07:50.909 10:36:21 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:51.167 * Looking for test storage...
00:07:51.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:51.167 10:36:21 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:51.167 10:36:21 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:07:51.167 10:36:21 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:51.167 10:36:21 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:07:51.167 10:36:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:07:51.168 10:36:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:51.168 10:36:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:51.168 10:36:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:07:51.168 10:36:21 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:51.168 10:36:21 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.168 --rc genhtml_branch_coverage=1
00:07:51.168 --rc genhtml_function_coverage=1
00:07:51.168 --rc genhtml_legend=1
00:07:51.168 --rc geninfo_all_blocks=1
00:07:51.168 --rc geninfo_unexecuted_blocks=1
00:07:51.168
00:07:51.168 '
00:07:51.168 10:36:21 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.168 --rc genhtml_branch_coverage=1
00:07:51.168 --rc genhtml_function_coverage=1
00:07:51.168 --rc genhtml_legend=1
00:07:51.168 --rc geninfo_all_blocks=1
00:07:51.168 --rc geninfo_unexecuted_blocks=1
00:07:51.168
00:07:51.168 '
00:07:51.168 10:36:21 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.168 --rc genhtml_branch_coverage=1
00:07:51.168 --rc genhtml_function_coverage=1
00:07:51.168 --rc genhtml_legend=1
00:07:51.168 --rc geninfo_all_blocks=1
00:07:51.168 --rc geninfo_unexecuted_blocks=1
00:07:51.168
00:07:51.168 '
00:07:51.168 10:36:21 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.168 --rc genhtml_branch_coverage=1
00:07:51.168 --rc genhtml_function_coverage=1
00:07:51.168 --rc genhtml_legend=1
00:07:51.168 --rc geninfo_all_blocks=1
00:07:51.168 --rc geninfo_unexecuted_blocks=1
00:07:51.168
00:07:51.168 '
00:07:51.168 10:36:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:07:51.168 10:36:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:07:51.168 10:36:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:07:51.168 10:36:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:07:51.168 10:36:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:51.168 10:36:21 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:51.168 10:36:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:51.168 ************************************
00:07:51.168 START TEST default_locks
00:07:51.168 ************************************
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59066
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59066
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59066 ']'
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:51.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:51.168 10:36:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:51.168 [2024-11-15 10:36:21.720859] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization...
00:07:51.168 [2024-11-15 10:36:21.721071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ]
00:07:51.426 [2024-11-15 10:36:21.900997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.684 [2024-11-15 10:36:22.004628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.250 10:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:52.250 10:36:22 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:07:52.250 10:36:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59066
00:07:52.250 10:36:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59066
00:07:52.250 10:36:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59066
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59066 ']'
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59066
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59066
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 59066
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59066'
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59066
00:07:52.817 10:36:23 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59066
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59066
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59066
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59066
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59066 ']'
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:55.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:55.345 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59066) - No such process
00:07:55.345 ERROR: process (pid: 59066) is no longer running
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:55.345
00:07:55.345 real 0m3.714s
00:07:55.345 user 0m3.908s
00:07:55.345 sys 0m0.602s
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:55.345 10:36:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:55.345 ************************************
00:07:55.345 END TEST default_locks
00:07:55.345 ************************************
00:07:55.345 10:36:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:55.345 10:36:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:55.345 10:36:25 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:55.345 10:36:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:55.345 ************************************
00:07:55.345 START TEST default_locks_via_rpc
00:07:55.345 ************************************
00:07:55.345 10:36:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:07:55.345 10:36:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59141
00:07:55.345 10:36:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59141
00:07:55.346 10:36:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59141 ']'
00:07:55.346 10:36:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:55.346 10:36:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:55.346 10:36:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:55.346 10:36:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:55.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:55.346 10:36:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:55.346 10:36:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:55.346 [2024-11-15 10:36:25.459172] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization...
00:07:55.346 [2024-11-15 10:36:25.459327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59141 ]
00:07:55.346 [2024-11-15 10:36:25.629882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.346 [2024-11-15 10:36:25.732417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59141
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59141
00:07:55.984 10:36:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59141
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59141 ']'
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59141
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59141
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 59141
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59141'
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59141
00:07:56.597 10:36:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59141
00:07:59.125
00:07:59.125 real 0m3.883s
00:07:59.125 user 0m4.069s
00:07:59.125 sys 0m0.634s
00:07:59.125 10:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:59.125 10:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:59.125 ************************************
00:07:59.125 END TEST default_locks_via_rpc
00:07:59.125 ************************************
00:07:59.125 10:36:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:59.125 10:36:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:59.125 10:36:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:59.125 10:36:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:59.125 ************************************
00:07:59.125 START TEST non_locking_app_on_locked_coremask
00:07:59.125 ************************************
00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59215
00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59215 /var/tmp/spdk.sock
00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59215 ']'
00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:59.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.125 10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 [2024-11-15 10:36:29.423983] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:07:59.125 [2024-11-15 10:36:29.424200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59215 ] 00:07:59.125 [2024-11-15 10:36:29.608179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.383 [2024-11-15 10:36:29.732293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59231 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59231 /var/tmp/spdk2.sock 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59231 ']' 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:08:00.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.317 10:36:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:00.317 [2024-11-15 10:36:30.647014] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:08:00.317 [2024-11-15 10:36:30.648020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59231 ] 00:08:00.317 [2024-11-15 10:36:30.859926] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:00.317 [2024-11-15 10:36:30.859997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.574 [2024-11-15 10:36:31.066846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.547 10:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.547 10:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:02.547 10:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59215 00:08:02.547 10:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:02.547 10:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59215 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59215 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59215 ']' 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59215 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59215 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:03.114 killing process with pid 59215 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59215' 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59215 00:08:03.114 10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59215 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59231 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59231 ']' 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59231 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59231 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:07.299 killing process with pid 59231 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59231' 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59231 00:08:07.299 10:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59231 00:08:09.828 00:08:09.828 real 0m10.527s 00:08:09.828 user 0m11.184s 00:08:09.828 sys 0m1.216s 00:08:09.828 10:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:08:09.828 10:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.828 ************************************ 00:08:09.828 END TEST non_locking_app_on_locked_coremask 00:08:09.828 ************************************ 00:08:09.828 10:36:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:09.828 10:36:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:09.828 10:36:39 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.828 10:36:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.828 ************************************ 00:08:09.828 START TEST locking_app_on_unlocked_coremask 00:08:09.828 ************************************ 00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59366 00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59366 /var/tmp/spdk.sock 00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59366 ']' 00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.828 10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.828 [2024-11-15 10:36:39.981424] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:08:09.828 [2024-11-15 10:36:39.981584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59366 ] 00:08:09.828 [2024-11-15 10:36:40.159149] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:09.828 [2024-11-15 10:36:40.159261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.828 [2024-11-15 10:36:40.285382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59382 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59382 /var/tmp/spdk2.sock 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59382 ']' 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.764 10:36:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:10.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.764 10:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:10.764 [2024-11-15 10:36:41.195849] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:08:10.764 [2024-11-15 10:36:41.196057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59382 ] 00:08:11.022 [2024-11-15 10:36:41.405010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.282 [2024-11-15 10:36:41.611677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.708 10:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:12.708 10:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:12.708 10:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59382 00:08:12.708 10:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59382 00:08:12.708 10:36:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59366 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59366 ']' 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59366 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59366 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.643 killing process with pid 59366 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59366' 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59366 00:08:13.643 10:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59366 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59382 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59382 ']' 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59382 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@957 -- # uname 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59382 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:17.832 killing process with pid 59382 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59382' 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59382 00:08:17.832 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59382 00:08:20.361 00:08:20.361 real 0m10.526s 00:08:20.361 user 0m11.330s 00:08:20.361 sys 0m1.257s 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.361 ************************************ 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.361 END TEST locking_app_on_unlocked_coremask 00:08:20.361 ************************************ 00:08:20.361 10:36:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:20.361 10:36:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.361 10:36:50 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.361 10:36:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.361 ************************************ 00:08:20.361 START TEST 
locking_app_on_locked_coremask 00:08:20.361 ************************************ 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59523 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59523 /var/tmp/spdk.sock 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59523 ']' 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.361 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.361 [2024-11-15 10:36:50.527984] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:08:20.361 [2024-11-15 10:36:50.528141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59523 ] 00:08:20.361 [2024-11-15 10:36:50.700763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.361 [2024-11-15 10:36:50.806533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59539 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59539 /var/tmp/spdk2.sock 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59539 /var/tmp/spdk2.sock 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59539 /var/tmp/spdk2.sock 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59539 ']' 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.296 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.296 [2024-11-15 10:36:51.727671] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:08:21.296 [2024-11-15 10:36:51.727857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59539 ] 00:08:21.553 [2024-11-15 10:36:51.927180] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59523 has claimed it. 00:08:21.553 [2024-11-15 10:36:51.927264] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:22.120 ERROR: process (pid: 59539) is no longer running 00:08:22.120 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59539) - No such process 00:08:22.120 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:22.120 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:22.120 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:22.120 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.120 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.120 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.120 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59523 00:08:22.120 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59523 00:08:22.120 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59523 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59523 ']' 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59523 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59523 00:08:22.379 
killing process with pid 59523 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59523' 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59523 00:08:22.379 10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59523 00:08:24.915 ************************************ 00:08:24.915 END TEST locking_app_on_locked_coremask 00:08:24.915 ************************************ 00:08:24.915 00:08:24.915 real 0m4.553s 00:08:24.915 user 0m5.157s 00:08:24.915 sys 0m0.738s 00:08:24.915 10:36:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.915 10:36:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.915 10:36:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:24.915 10:36:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:24.915 10:36:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.915 10:36:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:24.915 ************************************ 00:08:24.915 START TEST locking_overlapped_coremask 00:08:24.915 ************************************ 00:08:24.915 10:36:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:08:24.915 10:36:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59603 00:08:24.915 10:36:55 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:24.915 10:36:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59603 /var/tmp/spdk.sock 00:08:24.915 10:36:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59603 ']' 00:08:24.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.915 10:36:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.915 10:36:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.916 10:36:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.916 10:36:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.916 10:36:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.916 [2024-11-15 10:36:55.131326] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:08:24.916 [2024-11-15 10:36:55.131504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59603 ] 00:08:24.916 [2024-11-15 10:36:55.311092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:24.916 [2024-11-15 10:36:55.444851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.916 [2024-11-15 10:36:55.445304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.916 [2024-11-15 10:36:55.445317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59621 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59621 /var/tmp/spdk2.sock 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59621 /var/tmp/spdk2.sock 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:25.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59621 /var/tmp/spdk2.sock 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59621 ']' 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:25.855 10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.114 [2024-11-15 10:36:56.421471] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:08:26.114 [2024-11-15 10:36:56.421972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59621 ] 00:08:26.114 [2024-11-15 10:36:56.640091] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59603 has claimed it. 00:08:26.114 [2024-11-15 10:36:56.640518] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
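The lock failure above is deterministic from the two core masks: the first `spdk_tgt` (pid 59603) runs with `-m 0x7` and claims cores 0-2, while the second instance requests `-m 0x1c` (cores 2-4), so core 2 is contested and startup aborts. A minimal sketch of that overlap and of the `/var/tmp/spdk_cpu_lock_*` naming seen later in `check_remaining_locks` — an illustration of the mask arithmetic only, not SPDK's actual locking code:

```python
def cores_from_mask(mask: int) -> list[int]:
    """Expand a CPU core bitmask (e.g. 0x7) into the core indices it selects."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

def lock_files(mask: int) -> list[str]:
    # Per-core lock file names in the style the test's check_remaining_locks
    # expects (zero-padded to three digits).
    return [f"/var/tmp/spdk_cpu_lock_{core:03d}" for core in cores_from_mask(mask)]

first = cores_from_mask(0x7)    # cores claimed by the first spdk_tgt
second = cores_from_mask(0x1c)  # cores requested by the second spdk_tgt
contested = sorted(set(first) & set(second))  # the core the second instance cannot claim
```

With these masks, `contested` is `[2]`, matching the "Cannot create lock on core 2" error in the log.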
00:08:26.681 ERROR: process (pid: 59621) is no longer running 00:08:26.681 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59621) - No such process 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59603 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59603 ']' 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59603 00:08:26.681 10:36:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59603 00:08:26.681 killing process with pid 59603 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59603' 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59603 00:08:26.681 10:36:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59603 00:08:29.213 ************************************ 00:08:29.213 END TEST locking_overlapped_coremask 00:08:29.213 ************************************ 00:08:29.213 00:08:29.213 real 0m4.259s 00:08:29.213 user 0m11.686s 00:08:29.213 sys 0m0.539s 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:29.213 10:36:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:29.213 10:36:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:29.213 10:36:59 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.213 10:36:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.213 ************************************ 00:08:29.213 START TEST 
locking_overlapped_coremask_via_rpc 00:08:29.213 ************************************ 00:08:29.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59685 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59685 /var/tmp/spdk.sock 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59685 ']' 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.213 10:36:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.213 [2024-11-15 10:36:59.480756] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:08:29.213 [2024-11-15 10:36:59.480979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59685 ] 00:08:29.213 [2024-11-15 10:36:59.699779] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:29.213 [2024-11-15 10:36:59.699911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.472 [2024-11-15 10:36:59.812471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.472 [2024-11-15 10:36:59.812548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.472 [2024-11-15 10:36:59.812550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:30.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59714 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59714 /var/tmp/spdk2.sock 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59714 ']' 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:30.406 10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.406 [2024-11-15 10:37:00.756225] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:08:30.406 [2024-11-15 10:37:00.756443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59714 ] 00:08:30.664 [2024-11-15 10:37:00.975034] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:30.664 [2024-11-15 10:37:00.975141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.664 [2024-11-15 10:37:01.206250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.664 [2024-11-15 10:37:01.206302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.664 [2024-11-15 10:37:01.206311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.592 10:37:02 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.592 [2024-11-15 10:37:02.844668] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59685 has claimed it. 00:08:32.592 request: 00:08:32.592 { 00:08:32.592 "method": "framework_enable_cpumask_locks", 00:08:32.592 "req_id": 1 00:08:32.592 } 00:08:32.592 Got JSON-RPC error response 00:08:32.592 response: 00:08:32.592 { 00:08:32.592 "code": -32603, 00:08:32.592 "message": "Failed to claim CPU core: 2" 00:08:32.592 } 00:08:32.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
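The RPC variant reaches the same conflict through `framework_enable_cpumask_locks`: both targets start with `--disable-cpumask-locks`, then the second one's attempt to enable locking fails because core 2 is already claimed. The error payload printed above can be reassembled as JSON; `-32603` is the JSON-RPC 2.0 "internal error" code. This is a small illustrative reconstruction of the response shown in the log, not SPDK client code:

```python
import json

# The error body from the log, reassembled as a JSON document
# (values copied verbatim from the run above).
response_text = '{"code": -32603, "message": "Failed to claim CPU core: 2"}'
error = json.loads(response_text)

# -32603 is the JSON-RPC 2.0 predefined "Internal error" code, which is what
# the test's NOT wrapper expects when the contested core cannot be claimed.
is_internal_error = error["code"] == -32603
```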
00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59685 /var/tmp/spdk.sock 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59685 ']' 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.592 10:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.850 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.850 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:32.850 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59714 /var/tmp/spdk2.sock 00:08:32.850 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59714 ']' 00:08:32.850 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:32.850 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.850 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:32.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:32.850 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.850 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.107 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.107 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:33.107 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:33.107 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:33.107 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:33.107 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:33.107 00:08:33.107 real 0m4.260s 00:08:33.107 user 0m1.804s 00:08:33.107 sys 0m0.217s 00:08:33.107 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.107 10:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.107 ************************************ 00:08:33.107 END TEST locking_overlapped_coremask_via_rpc 00:08:33.107 ************************************ 00:08:33.107 10:37:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:33.107 10:37:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59685 ]] 00:08:33.107 10:37:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59685 00:08:33.107 10:37:03 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59685 ']' 00:08:33.107 10:37:03 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59685 00:08:33.107 10:37:03 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:33.107 10:37:03 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:33.107 10:37:03 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59685 00:08:33.107 killing process with pid 59685 00:08:33.107 10:37:03 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:33.107 10:37:03 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:33.108 10:37:03 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59685' 00:08:33.108 10:37:03 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59685 00:08:33.108 10:37:03 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59685 00:08:35.635 10:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59714 ]] 00:08:35.635 10:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59714 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59714 ']' 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59714 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59714 00:08:35.635 killing process with pid 59714 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59714' 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59714 00:08:35.635 10:37:05 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59714 00:08:37.534 10:37:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:37.534 10:37:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:37.534 10:37:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59685 ]] 00:08:37.534 10:37:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59685 00:08:37.534 Process with pid 59685 is not found 00:08:37.534 10:37:07 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59685 ']' 00:08:37.534 10:37:07 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59685 00:08:37.534 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59685) - No such process 00:08:37.534 10:37:07 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59685 is not found' 00:08:37.534 10:37:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59714 ]] 00:08:37.534 10:37:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59714 00:08:37.534 10:37:07 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59714 ']' 00:08:37.534 10:37:07 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59714 00:08:37.534 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59714) - No such process 00:08:37.534 Process with pid 59714 is not found 00:08:37.534 10:37:07 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59714 is not found' 00:08:37.534 10:37:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:37.534 00:08:37.534 real 0m46.583s 00:08:37.534 user 1m21.781s 00:08:37.534 sys 0m6.194s 00:08:37.534 ************************************ 00:08:37.534 END TEST cpu_locks 00:08:37.534 ************************************ 00:08:37.534 10:37:07 event.cpu_locks -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:08:37.534 10:37:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.534 ************************************ 00:08:37.534 END TEST event 00:08:37.534 ************************************ 00:08:37.534 00:08:37.534 real 1m19.443s 00:08:37.534 user 2m28.608s 00:08:37.534 sys 0m9.936s 00:08:37.534 10:37:08 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.534 10:37:08 event -- common/autotest_common.sh@10 -- # set +x 00:08:37.534 10:37:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:37.534 10:37:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:37.534 10:37:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.534 10:37:08 -- common/autotest_common.sh@10 -- # set +x 00:08:37.534 ************************************ 00:08:37.534 START TEST thread 00:08:37.534 ************************************ 00:08:37.534 10:37:08 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:37.812 * Looking for test storage... 
00:08:37.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:37.812 10:37:08 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:37.812 10:37:08 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:37.812 10:37:08 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:37.812 10:37:08 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:37.812 10:37:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.812 10:37:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.812 10:37:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.812 10:37:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.812 10:37:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.812 10:37:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.812 10:37:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.812 10:37:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.812 10:37:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.813 10:37:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.813 10:37:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.813 10:37:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:37.813 10:37:08 thread -- scripts/common.sh@345 -- # : 1 00:08:37.813 10:37:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.813 10:37:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.813 10:37:08 thread -- scripts/common.sh@365 -- # decimal 1 00:08:37.813 10:37:08 thread -- scripts/common.sh@353 -- # local d=1 00:08:37.813 10:37:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.813 10:37:08 thread -- scripts/common.sh@355 -- # echo 1 00:08:37.813 10:37:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.813 10:37:08 thread -- scripts/common.sh@366 -- # decimal 2 00:08:37.813 10:37:08 thread -- scripts/common.sh@353 -- # local d=2 00:08:37.813 10:37:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.813 10:37:08 thread -- scripts/common.sh@355 -- # echo 2 00:08:37.813 10:37:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.813 10:37:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.813 10:37:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.813 10:37:08 thread -- scripts/common.sh@368 -- # return 0 00:08:37.813 10:37:08 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.813 10:37:08 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:37.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.813 --rc genhtml_branch_coverage=1 00:08:37.813 --rc genhtml_function_coverage=1 00:08:37.813 --rc genhtml_legend=1 00:08:37.813 --rc geninfo_all_blocks=1 00:08:37.813 --rc geninfo_unexecuted_blocks=1 00:08:37.813 00:08:37.813 ' 00:08:37.813 10:37:08 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:37.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.813 --rc genhtml_branch_coverage=1 00:08:37.813 --rc genhtml_function_coverage=1 00:08:37.813 --rc genhtml_legend=1 00:08:37.813 --rc geninfo_all_blocks=1 00:08:37.813 --rc geninfo_unexecuted_blocks=1 00:08:37.813 00:08:37.813 ' 00:08:37.813 10:37:08 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:37.813 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.813 --rc genhtml_branch_coverage=1 00:08:37.813 --rc genhtml_function_coverage=1 00:08:37.813 --rc genhtml_legend=1 00:08:37.813 --rc geninfo_all_blocks=1 00:08:37.813 --rc geninfo_unexecuted_blocks=1 00:08:37.813 00:08:37.813 ' 00:08:37.813 10:37:08 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:37.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.813 --rc genhtml_branch_coverage=1 00:08:37.813 --rc genhtml_function_coverage=1 00:08:37.813 --rc genhtml_legend=1 00:08:37.813 --rc geninfo_all_blocks=1 00:08:37.813 --rc geninfo_unexecuted_blocks=1 00:08:37.813 00:08:37.813 ' 00:08:37.813 10:37:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:37.813 10:37:08 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:37.813 10:37:08 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.813 10:37:08 thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.813 ************************************ 00:08:37.813 START TEST thread_poller_perf 00:08:37.813 ************************************ 00:08:37.813 10:37:08 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:37.813 [2024-11-15 10:37:08.294927] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:08:37.813 [2024-11-15 10:37:08.295290] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59896 ] 00:08:38.071 [2024-11-15 10:37:08.479739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.071 [2024-11-15 10:37:08.617960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.071 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:39.443 [2024-11-15T10:37:10.003Z] ====================================== 00:08:39.443 [2024-11-15T10:37:10.003Z] busy:2212790448 (cyc) 00:08:39.443 [2024-11-15T10:37:10.003Z] total_run_count: 291000 00:08:39.443 [2024-11-15T10:37:10.003Z] tsc_hz: 2200000000 (cyc) 00:08:39.443 [2024-11-15T10:37:10.003Z] ====================================== 00:08:39.443 [2024-11-15T10:37:10.003Z] poller_cost: 7604 (cyc), 3456 (nsec) 00:08:39.443 00:08:39.443 ************************************ 00:08:39.443 END TEST thread_poller_perf 00:08:39.443 ************************************ 00:08:39.443 real 0m1.609s 00:08:39.443 user 0m1.404s 00:08:39.443 sys 0m0.092s 00:08:39.443 10:37:09 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.443 10:37:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 10:37:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:39.443 10:37:09 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:39.443 10:37:09 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.443 10:37:09 thread -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 ************************************ 00:08:39.443 START TEST thread_poller_perf 00:08:39.443 
************************************ 00:08:39.443 10:37:09 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:39.443 [2024-11-15 10:37:09.940229] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:08:39.443 [2024-11-15 10:37:09.940413] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59933 ] 00:08:39.700 [2024-11-15 10:37:10.119521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.700 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:39.700 [2024-11-15 10:37:10.246170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.073 [2024-11-15T10:37:11.633Z] ====================================== 00:08:41.073 [2024-11-15T10:37:11.633Z] busy:2205066390 (cyc) 00:08:41.073 [2024-11-15T10:37:11.633Z] total_run_count: 3494000 00:08:41.073 [2024-11-15T10:37:11.633Z] tsc_hz: 2200000000 (cyc) 00:08:41.073 [2024-11-15T10:37:11.633Z] ====================================== 00:08:41.073 [2024-11-15T10:37:11.633Z] poller_cost: 631 (cyc), 286 (nsec) 00:08:41.073 ************************************ 00:08:41.073 END TEST thread_poller_perf 00:08:41.073 ************************************ 00:08:41.073 00:08:41.073 real 0m1.571s 00:08:41.073 user 0m1.366s 00:08:41.073 sys 0m0.094s 00:08:41.073 10:37:11 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.073 10:37:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:41.073 10:37:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:41.073 ************************************ 00:08:41.073 END TEST thread 00:08:41.073 ************************************ 00:08:41.073 
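The `poller_cost` figures in both `thread_poller_perf` runs follow directly from the printed counters: busy cycles divided by `total_run_count` gives the per-poll cost in cycles, and scaling by the reported `tsc_hz` converts that to nanoseconds. A quick check against the numbers above (integer truncation reproduces the printed values here; whether the tool truncates or rounds in general is an assumption):

```python
def poller_cost(busy_cyc: int, run_count: int, tsc_hz: int) -> tuple[int, int]:
    """Per-poll cost as (cycles, nanoseconds), truncated to integers."""
    cyc = busy_cyc // run_count
    nsec = cyc * 1_000_000_000 // tsc_hz
    return cyc, nsec

# First run (-l 1, 1 us period): 2212790448 busy cycles over 291000 polls
run1 = poller_cost(2212790448, 291000, 2_200_000_000)
# Second run (-l 0, 0 us period): 2205066390 busy cycles over 3494000 polls
run2 = poller_cost(2205066390, 3494000, 2_200_000_000)
```

`run1` comes out to (7604, 3456) and `run2` to (631, 286), matching the two result blocks above.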
00:08:41.073 real 0m3.451s 00:08:41.073 user 0m2.916s 00:08:41.073 sys 0m0.310s 00:08:41.073 10:37:11 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.073 10:37:11 thread -- common/autotest_common.sh@10 -- # set +x 00:08:41.073 10:37:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:41.073 10:37:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:41.073 10:37:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:41.073 10:37:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.073 10:37:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.073 ************************************ 00:08:41.073 START TEST app_cmdline 00:08:41.073 ************************************ 00:08:41.073 10:37:11 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:41.332 * Looking for test storage... 00:08:41.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.332 10:37:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:41.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.332 --rc genhtml_branch_coverage=1 00:08:41.332 --rc genhtml_function_coverage=1 00:08:41.332 --rc 
genhtml_legend=1 00:08:41.332 --rc geninfo_all_blocks=1 00:08:41.332 --rc geninfo_unexecuted_blocks=1 00:08:41.332 00:08:41.332 ' 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:41.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.332 --rc genhtml_branch_coverage=1 00:08:41.332 --rc genhtml_function_coverage=1 00:08:41.332 --rc genhtml_legend=1 00:08:41.332 --rc geninfo_all_blocks=1 00:08:41.332 --rc geninfo_unexecuted_blocks=1 00:08:41.332 00:08:41.332 ' 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:41.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.332 --rc genhtml_branch_coverage=1 00:08:41.332 --rc genhtml_function_coverage=1 00:08:41.332 --rc genhtml_legend=1 00:08:41.332 --rc geninfo_all_blocks=1 00:08:41.332 --rc geninfo_unexecuted_blocks=1 00:08:41.332 00:08:41.332 ' 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:41.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.332 --rc genhtml_branch_coverage=1 00:08:41.332 --rc genhtml_function_coverage=1 00:08:41.332 --rc genhtml_legend=1 00:08:41.332 --rc geninfo_all_blocks=1 00:08:41.332 --rc geninfo_unexecuted_blocks=1 00:08:41.332 00:08:41.332 ' 00:08:41.332 10:37:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:41.332 10:37:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60016 00:08:41.332 10:37:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:41.332 10:37:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60016 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60016 ']' 00:08:41.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:41.332 10:37:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:41.590 [2024-11-15 10:37:11.899104] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:08:41.590 [2024-11-15 10:37:11.899528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60016 ] 00:08:41.590 [2024-11-15 10:37:12.084141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.848 [2024-11-15 10:37:12.186980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.413 10:37:12 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:42.413 10:37:12 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:42.413 10:37:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:42.979 { 00:08:42.979 "version": "SPDK v25.01-pre git sha1 59da1a1d7", 00:08:42.979 "fields": { 00:08:42.979 "major": 25, 00:08:42.979 "minor": 1, 00:08:42.979 "patch": 0, 00:08:42.979 "suffix": "-pre", 00:08:42.979 "commit": "59da1a1d7" 00:08:42.979 } 00:08:42.979 } 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:42.979 10:37:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:42.979 10:37:13 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.236 request: 00:08:43.236 { 00:08:43.236 "method": "env_dpdk_get_mem_stats", 00:08:43.236 "req_id": 1 00:08:43.236 } 00:08:43.236 Got JSON-RPC error response 00:08:43.236 response: 00:08:43.236 { 00:08:43.236 "code": -32601, 00:08:43.236 "message": "Method not found" 00:08:43.236 } 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.236 10:37:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60016 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60016 ']' 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60016 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60016 00:08:43.236 killing process with pid 60016 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60016' 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@971 -- # kill 60016 00:08:43.236 10:37:13 app_cmdline -- common/autotest_common.sh@976 -- # wait 60016 00:08:45.172 
************************************ 00:08:45.172 END TEST app_cmdline 00:08:45.172 ************************************ 00:08:45.172 00:08:45.172 real 0m4.138s 00:08:45.172 user 0m4.762s 00:08:45.172 sys 0m0.511s 00:08:45.172 10:37:15 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.172 10:37:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:45.431 10:37:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:45.431 10:37:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:45.431 10:37:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.431 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:08:45.431 ************************************ 00:08:45.431 START TEST version 00:08:45.431 ************************************ 00:08:45.431 10:37:15 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:45.431 * Looking for test storage... 
00:08:45.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:45.432 10:37:15 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.432 10:37:15 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.432 10:37:15 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.432 10:37:15 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.432 10:37:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.432 10:37:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.432 10:37:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.432 10:37:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.432 10:37:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.432 10:37:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.432 10:37:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.432 10:37:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.432 10:37:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.432 10:37:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.432 10:37:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.432 10:37:15 version -- scripts/common.sh@344 -- # case "$op" in 00:08:45.432 10:37:15 version -- scripts/common.sh@345 -- # : 1 00:08:45.432 10:37:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.432 10:37:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.432 10:37:15 version -- scripts/common.sh@365 -- # decimal 1 00:08:45.432 10:37:15 version -- scripts/common.sh@353 -- # local d=1 00:08:45.432 10:37:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.432 10:37:15 version -- scripts/common.sh@355 -- # echo 1 00:08:45.432 10:37:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.432 10:37:15 version -- scripts/common.sh@366 -- # decimal 2 00:08:45.432 10:37:15 version -- scripts/common.sh@353 -- # local d=2 00:08:45.432 10:37:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.432 10:37:15 version -- scripts/common.sh@355 -- # echo 2 00:08:45.432 10:37:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.432 10:37:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.432 10:37:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.432 10:37:15 version -- scripts/common.sh@368 -- # return 0 00:08:45.432 10:37:15 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.432 10:37:15 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.432 --rc genhtml_branch_coverage=1 00:08:45.432 --rc genhtml_function_coverage=1 00:08:45.432 --rc genhtml_legend=1 00:08:45.432 --rc geninfo_all_blocks=1 00:08:45.432 --rc geninfo_unexecuted_blocks=1 00:08:45.432 00:08:45.432 ' 00:08:45.432 10:37:15 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.432 --rc genhtml_branch_coverage=1 00:08:45.432 --rc genhtml_function_coverage=1 00:08:45.432 --rc genhtml_legend=1 00:08:45.432 --rc geninfo_all_blocks=1 00:08:45.432 --rc geninfo_unexecuted_blocks=1 00:08:45.432 00:08:45.432 ' 00:08:45.432 10:37:15 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.432 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.432 --rc genhtml_branch_coverage=1 00:08:45.432 --rc genhtml_function_coverage=1 00:08:45.432 --rc genhtml_legend=1 00:08:45.432 --rc geninfo_all_blocks=1 00:08:45.432 --rc geninfo_unexecuted_blocks=1 00:08:45.432 00:08:45.432 ' 00:08:45.432 10:37:15 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.432 --rc genhtml_branch_coverage=1 00:08:45.432 --rc genhtml_function_coverage=1 00:08:45.432 --rc genhtml_legend=1 00:08:45.432 --rc geninfo_all_blocks=1 00:08:45.432 --rc geninfo_unexecuted_blocks=1 00:08:45.432 00:08:45.432 ' 00:08:45.432 10:37:15 version -- app/version.sh@17 -- # get_header_version major 00:08:45.432 10:37:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:45.432 10:37:15 version -- app/version.sh@14 -- # cut -f2 00:08:45.432 10:37:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:45.432 10:37:15 version -- app/version.sh@17 -- # major=25 00:08:45.432 10:37:15 version -- app/version.sh@18 -- # get_header_version minor 00:08:45.432 10:37:15 version -- app/version.sh@14 -- # cut -f2 00:08:45.432 10:37:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:45.432 10:37:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:45.432 10:37:15 version -- app/version.sh@18 -- # minor=1 00:08:45.432 10:37:15 version -- app/version.sh@19 -- # get_header_version patch 00:08:45.432 10:37:15 version -- app/version.sh@14 -- # cut -f2 00:08:45.432 10:37:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:45.432 10:37:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:45.432 10:37:15 version -- app/version.sh@19 -- # patch=0 00:08:45.432 
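The version.sh trace around this point greps SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and assembles them into 25.1rc0. A hedged re-creation of that assembly, using only values visible in the trace (the "-pre becomes rc0" step is inferred from the version=25.1rc0 line, not read from version.sh itself):

```shell
major=25; minor=1; patch=0; suffix=-pre   # values produced by the get_header_version calls above

version="$major.$minor"
if (( patch != 0 )); then                 # mirrors the '(( patch != 0 ))' check in the trace
    version="$version.$patch"
fi
if [ "$suffix" = "-pre" ]; then           # assumed mapping: a -pre suffix is reported as an rc0 tag
    version="${version}rc0"
fi
echo "$version"                           # 25.1rc0, matching py_version from the python package
```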
10:37:15 version -- app/version.sh@20 -- # get_header_version suffix 00:08:45.432 10:37:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:45.432 10:37:15 version -- app/version.sh@14 -- # cut -f2 00:08:45.432 10:37:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:45.432 10:37:15 version -- app/version.sh@20 -- # suffix=-pre 00:08:45.432 10:37:15 version -- app/version.sh@22 -- # version=25.1 00:08:45.432 10:37:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:45.432 10:37:15 version -- app/version.sh@28 -- # version=25.1rc0 00:08:45.432 10:37:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:45.432 10:37:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:45.691 10:37:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:45.691 10:37:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:45.691 00:08:45.691 real 0m0.286s 00:08:45.691 user 0m0.205s 00:08:45.691 sys 0m0.117s 00:08:45.691 ************************************ 00:08:45.691 END TEST version 00:08:45.691 ************************************ 00:08:45.691 10:37:16 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.691 10:37:16 version -- common/autotest_common.sh@10 -- # set +x 00:08:45.691 10:37:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:45.691 10:37:16 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:45.691 10:37:16 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:45.691 10:37:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:45.691 10:37:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.691 10:37:16 -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.691 ************************************ 00:08:45.691 START TEST bdev_raid 00:08:45.691 ************************************ 00:08:45.691 10:37:16 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:45.691 * Looking for test storage... 00:08:45.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:45.691 10:37:16 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.691 10:37:16 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.691 10:37:16 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.691 10:37:16 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.691 10:37:16 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.691 10:37:16 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.691 10:37:16 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.691 10:37:16 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.691 10:37:16 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.691 10:37:16 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.691 10:37:16 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.691 10:37:16 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.692 10:37:16 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:45.950 10:37:16 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.950 10:37:16 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:45.950 10:37:16 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:45.950 10:37:16 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.950 10:37:16 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:45.950 10:37:16 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.951 10:37:16 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.951 10:37:16 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.951 10:37:16 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:45.951 10:37:16 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.951 10:37:16 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.951 --rc genhtml_branch_coverage=1 00:08:45.951 --rc genhtml_function_coverage=1 00:08:45.951 --rc genhtml_legend=1 00:08:45.951 --rc geninfo_all_blocks=1 00:08:45.951 --rc geninfo_unexecuted_blocks=1 00:08:45.951 00:08:45.951 ' 00:08:45.951 10:37:16 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.951 --rc genhtml_branch_coverage=1 00:08:45.951 --rc genhtml_function_coverage=1 00:08:45.951 --rc genhtml_legend=1 00:08:45.951 --rc geninfo_all_blocks=1 00:08:45.951 --rc geninfo_unexecuted_blocks=1 00:08:45.951 00:08:45.951 ' 00:08:45.951 10:37:16 bdev_raid -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:08:45.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.951 --rc genhtml_branch_coverage=1 00:08:45.951 --rc genhtml_function_coverage=1 00:08:45.951 --rc genhtml_legend=1 00:08:45.951 --rc geninfo_all_blocks=1 00:08:45.951 --rc geninfo_unexecuted_blocks=1 00:08:45.951 00:08:45.951 ' 00:08:45.951 10:37:16 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.951 --rc genhtml_branch_coverage=1 00:08:45.951 --rc genhtml_function_coverage=1 00:08:45.951 --rc genhtml_legend=1 00:08:45.951 --rc geninfo_all_blocks=1 00:08:45.951 --rc geninfo_unexecuted_blocks=1 00:08:45.951 00:08:45.951 ' 00:08:45.951 10:37:16 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:45.951 10:37:16 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:45.951 10:37:16 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:45.951 10:37:16 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:45.951 10:37:16 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:45.951 10:37:16 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:45.951 10:37:16 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:45.951 10:37:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:45.951 10:37:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.951 10:37:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.951 ************************************ 00:08:45.951 START TEST raid1_resize_data_offset_test 00:08:45.951 ************************************ 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=60204 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60204' 00:08:45.951 Process raid pid: 60204 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60204 00:08:45.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 60204 ']' 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:45.951 10:37:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.951 [2024-11-15 10:37:16.377976] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:08:45.951 [2024-11-15 10:37:16.379041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.210 [2024-11-15 10:37:16.564422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.210 [2024-11-15 10:37:16.687332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.469 [2024-11-15 10:37:16.875121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.469 [2024-11-15 10:37:16.875397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.036 malloc0 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.036 malloc1 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.036 10:37:17 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.036 null0 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.036 [2024-11-15 10:37:17.549929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:47.036 [2024-11-15 10:37:17.552335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:47.036 [2024-11-15 10:37:17.552428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:47.036 [2024-11-15 10:37:17.552615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:47.036 [2024-11-15 10:37:17.552637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:47.036 [2024-11-15 10:37:17.552963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:47.036 [2024-11-15 10:37:17.553237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:47.036 [2024-11-15 10:37:17.553260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:47.036 [2024-11-15 10:37:17.553620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
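The data_offset values asserted in the lines that follow (2048 before the resize, 2070 after malloc2 is added) are counts of 512-byte blocks, per the "blockcnt 129024, blocklen 512" line in the raid configure trace above; converting them to bytes shows the initial offset is exactly 1 MiB. Interpreting data_offset in blocklen units is an assumption based on those units — the multiplications themselves are just arithmetic on the logged values:

```shell
blocklen=512                      # from 'blockcnt 129024, blocklen 512' in the configure trace

echo $(( 2048 * blocklen ))       # 1048576 bytes = 1 MiB before bdev_raid_add_base_bdev
echo $(( 2070 * blocklen ))       # 1059840 bytes once malloc2 (created with -o 30) has joined
```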
00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.036 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.295 10:37:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:08:47.295 10:37:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:08:47.295 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.295 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.295 [2024-11-15 10:37:17.618088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:08:47.295 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.295 10:37:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:08:47.295 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.295 10:37:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.554 malloc2
00:08:47.554 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.554 10:37:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:08:47.554 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.554 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.812 [2024-11-15 10:37:18.117092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:47.812 [2024-11-15 10:37:18.133481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.812 [2024-11-15 10:37:18.135791] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60204
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 60204 ']'
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 60204
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60204
killing process with pid 60204
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60204'
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 60204
00:08:47.812 10:37:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 60204
00:08:47.812 [2024-11-15 10:37:18.230774] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:47.812 [2024-11-15 10:37:18.231578] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:08:47.812 [2024-11-15 10:37:18.231659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:47.812 [2024-11-15 10:37:18.231686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:08:47.812 [2024-11-15 10:37:18.262784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:47.812 [2024-11-15 10:37:18.263424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:47.812 [2024-11-15 10:37:18.263461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:49.798 [2024-11-15 10:37:19.788404] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:50.363 ************************************
00:08:50.363 END TEST raid1_resize_data_offset_test
00:08:50.363 ************************************
00:08:50.363 10:37:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:08:50.363 
00:08:50.363 real 0m4.524s
00:08:50.363 user 0m4.676s
00:08:50.363 sys 0m0.466s
00:08:50.363 10:37:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:50.363 10:37:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.363 10:37:20 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:08:50.363 10:37:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:08:50.363 10:37:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:50.363 10:37:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:50.363 ************************************
00:08:50.363 START TEST raid0_resize_superblock_test
00:08:50.363 ************************************
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0
Process raid pid: 60287
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
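The data_offset checks in the raid1_resize_data_offset_test run above move from `(( 2048 == 2048 ))` to `(( 2070 == 2070 ))` once `null0` is replaced by `malloc2`, which was created with `-o 30`. The new offset is consistent with rounding 2048 up to the next multiple of 30 (69 * 30 = 2070). A minimal sketch of that rounding rule; `align_up` is a hypothetical helper, not part of the SPDK test suite, and reading `-o 30` as the source of the alignment boundary is an assumption drawn only from the numbers in this log:

```shell
#!/bin/sh
# Hypothetical helper: round VALUE up to the next multiple of BOUNDARY.
# Assumption (from this log, not SPDK source): data_offset went 2048 -> 2070
# because 2070 is the smallest multiple of 30 that is >= 2048.
align_up() {
  value=$1 boundary=$2
  echo $(( (value + boundary - 1) / boundary * boundary ))
}

align_up 2048 30   # prints 2070, the value the (( 2070 == 2070 )) check expects
```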
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60287
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60287'
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60287
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60287 ']'
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:50.363 10:37:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.621 [2024-11-15 10:37:20.934205] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization...
00:08:50.621 [2024-11-15 10:37:20.934571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:50.621 [2024-11-15 10:37:21.117402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:50.878 [2024-11-15 10:37:21.245071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.136 [2024-11-15 10:37:21.475400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:51.136 [2024-11-15 10:37:21.475665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:51.703 10:37:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:51.703 10:37:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:08:51.703 10:37:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:08:51.703 10:37:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.703 10:37:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:51.961 malloc0
00:08:51.961 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.961 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:51.961 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.961 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:51.961 [2024-11-15 10:37:22.490914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:51.961 [2024-11-15 10:37:22.491135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:51.961 [2024-11-15 10:37:22.491194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:51.961 [2024-11-15 10:37:22.491216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:51.961 [2024-11-15 10:37:22.493872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:51.961 [2024-11-15 10:37:22.493924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:51.961 pt0
00:08:51.961 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.961 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:08:51.961 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.961 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.219 95f9cad0-189e-418c-a7ba-8e1a855ef97a
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.219 4799b755-18e4-48ed-8889-c168eb9d1229
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.219 b0108bf1-cd07-4d55-8c0e-dbb03c18c51e
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.219 [2024-11-15 10:37:22.589979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4799b755-18e4-48ed-8889-c168eb9d1229 is claimed
[2024-11-15 10:37:22.590273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b0108bf1-cd07-4d55-8c0e-dbb03c18c51e is claimed
[2024-11-15 10:37:22.590512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-15 10:37:22.590539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-15 10:37:22.590886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-15 10:37:22.591197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-15 10:37:22.591226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-15 10:37:22.591452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.219 [2024-11-15 10:37:22.730272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.219 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.219 [2024-11-15 10:37:22.774287] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-15 10:37:22.774327] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4799b755-18e4-48ed-8889-c168eb9d1229' was resized: old size 131072, new size 204800
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.478 [2024-11-15 10:37:22.782132] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-15 10:37:22.782164] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b0108bf1-cd07-4d55-8c0e-dbb03c18c51e' was resized: old size 131072, new size 204800
[2024-11-15 10:37:22.782204] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.478 [2024-11-15 10:37:22.890366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.478 [2024-11-15 10:37:22.934084] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-11-15 10:37:22.934197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-11-15 10:37:22.934223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-15 10:37:22.934243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-11-15 10:37:22.934412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-15 10:37:22.934473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-15 10:37:22.934496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.478 [2024-11-15 10:37:22.941958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-15 10:37:22.942024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-15 10:37:22.942053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-15 10:37:22.942072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-15 10:37:22.944808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-15 10:37:22.944991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:52.478 pt0
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.478 [2024-11-15 10:37:22.947515] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4799b755-18e4-48ed-8889-c168eb9d1229
[2024-11-15 10:37:22.947589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4799b755-18e4-48ed-8889-c168eb9d1229 is claimed
[2024-11-15 10:37:22.948059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b0108bf1-cd07-4d55-8c0e-dbb03c18c51e
[2024-11-15 10:37:22.948411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b0108bf1-cd07-4d55-8c0e-dbb03c18c51e is claimed
[2024-11-15 10:37:22.948678] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b0108bf1-cd07-4d55-8c0e-dbb03c18c51e (2) smaller than existing raid bdev Raid (3)
[2024-11-15 10:37:22.948733] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4799b755-18e4-48ed-8889-c168eb9d1229: File exists
[2024-11-15 10:37:22.948823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-15 10:37:22.948848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-11-15 10:37:22.949193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-15 10:37:22.949475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-15 10:37:22.949500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-15 10:37:22.949697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:52.478 [2024-11-15 10:37:22.962285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:52.478 10:37:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:52.478 10:37:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:08:52.478 10:37:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60287
00:08:52.478 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60287 ']'
00:08:52.478 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60287
00:08:52.478 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # uname
00:08:52.478 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:52.478 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60287
00:08:52.736 killing process with pid 60287
00:08:52.736 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:52.736 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:52.736 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60287'
00:08:52.736 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60287
[2024-11-15 10:37:23.038092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:52.736 10:37:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60287
[2024-11-15 10:37:23.038195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-15 10:37:23.038263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-15 10:37:23.038279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:08:54.109 [2024-11-15 10:37:24.297534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:55.044 10:37:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:08:55.044 
00:08:55.044 real 0m4.457s
00:08:55.044 user 0m4.922s
00:08:55.044 sys 0m0.524s
00:08:55.044 10:37:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:55.044 ************************************
00:08:55.044 END TEST raid0_resize_superblock_test
00:08:55.044 ************************************
00:08:55.044 10:37:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.044 10:37:25 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:08:55.044 10:37:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:08:55.044 10:37:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:55.044 10:37:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:55.044 ************************************
00:08:55.044 START TEST raid1_resize_superblock_test
00:08:55.044 ************************************
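The block counts in the raid0 run above are internally consistent: each 64 MiB lvol reports 131072 blocks of 512 B, the raid bdev (created with `-s`, i.e. with an on-disk superblock) reports blockcnt 245760, and after both lvols are resized to 100 MiB (204800 blocks) the raid grows to 393216. That matches each base bdev reserving 8192 blocks (4 MiB) for the superblock/data offset and raid0 striping the remainder of both members. A rough sanity check of that arithmetic; the 8192-block reservation is inferred from this log, not taken from SPDK source:

```shell
#!/bin/sh
# Assumed per-base reservation (blocks) for the raid superblock/data offset,
# inferred from 131072 - 245760/2 = 8192 in the log above.
RESERVED=8192

# raid0 capacity in blocks = (per-base blocks - reserved) * number of bases
raid0_blockcnt() {
  echo $(( ($1 - RESERVED) * $2 ))
}

raid0_blockcnt 131072 2   # 64 MiB lvols  -> prints 245760
raid0_blockcnt 204800 2   # 100 MiB lvols -> prints 393216
```

The same arithmetic explains why the raid1 mirror created later reports blockcnt 122880: a mirror's capacity is one member's usable area, 131072 - 8192.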
Process raid pid: 60386
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60386
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60386'
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60386
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60386 ']'
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:55.044 10:37:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.302 [2024-11-15 10:37:25.499101] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization...
00:08:55.302 [2024-11-15 10:37:25.499621] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:55.302 [2024-11-15 10:37:25.697405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.302 [2024-11-15 10:37:25.833757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:55.560 [2024-11-15 10:37:26.059292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:55.560 [2024-11-15 10:37:26.059614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:56.125 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:56.125 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:08:56.125 10:37:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:08:56.125 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.125 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.692 malloc0
00:08:56.692 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.692 10:37:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:56.692 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.692 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.692 [2024-11-15 10:37:26.952200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:56.692 [2024-11-15 10:37:26.952277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:56.692 [2024-11-15 10:37:26.952312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:56.692 [2024-11-15 10:37:26.952331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:56.692 [2024-11-15 10:37:26.955186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:56.692 [2024-11-15 10:37:26.955238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:56.692 pt0
00:08:56.692 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.692 10:37:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:08:56.692 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.692 10:37:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.692 4762354d-3ce2-44b7-b4e4-96e43b47556d
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.692 a968c390-95bf-475d-8fe4-a79dcf916709
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.692 25c991bf-9c4d-4e4c-b42e-4566878b6aa6
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.692 [2024-11-15 10:37:27.043943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a968c390-95bf-475d-8fe4-a79dcf916709 is claimed
[2024-11-15 10:37:27.044223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 25c991bf-9c4d-4e4c-b42e-4566878b6aa6 is claimed
[2024-11-15 10:37:27.044458] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-15 10:37:27.044485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
[2024-11-15 10:37:27.044814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-15 10:37:27.045068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-15 10:37:27.045086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-15 10:37:27.045282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:56.692 [2024-11-15 
10:37:27.172255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:56.692 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.693 [2024-11-15 10:37:27.224367] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:56.693 [2024-11-15 10:37:27.224412] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a968c390-95bf-475d-8fe4-a79dcf916709' was resized: old size 131072, new size 204800 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.693 [2024-11-15 10:37:27.232207] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:56.693 [2024-11-15 10:37:27.232242] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '25c991bf-9c4d-4e4c-b42e-4566878b6aa6' was resized: old size 131072, new size 204800 00:08:56.693 
[2024-11-15 10:37:27.232287] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.693 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:56.950 10:37:27 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.950 [2024-11-15 10:37:27.336331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.950 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.950 [2024-11-15 10:37:27.388088] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:56.950 [2024-11-15 10:37:27.388193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:56.950 [2024-11-15 10:37:27.388234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:56.950 [2024-11-15 10:37:27.388460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.951 [2024-11-15 10:37:27.388726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.951 [2024-11-15 10:37:27.388838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.951 
[2024-11-15 10:37:27.388862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.951 [2024-11-15 10:37:27.395960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:56.951 [2024-11-15 10:37:27.396025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.951 [2024-11-15 10:37:27.396054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:56.951 [2024-11-15 10:37:27.396072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.951 [2024-11-15 10:37:27.398720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.951 [2024-11-15 10:37:27.398899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:56.951 pt0 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:56.951 [2024-11-15 10:37:27.401215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a968c390-95bf-475d-8fe4-a79dcf916709 00:08:56.951 [2024-11-15 10:37:27.401298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a968c390-95bf-475d-8fe4-a79dcf916709 is claimed 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:56.951 [2024-11-15 10:37:27.401462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 25c991bf-9c4d-4e4c-b42e-4566878b6aa6 00:08:56.951 [2024-11-15 10:37:27.401502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 25c991bf-9c4d-4e4c-b42e-4566878b6aa6 is claimed 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.951 [2024-11-15 10:37:27.401662] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 25c991bf-9c4d-4e4c-b42e-4566878b6aa6 (2) smaller than existing raid bdev Raid (3) 00:08:56.951 [2024-11-15 10:37:27.401699] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a968c390-95bf-475d-8fe4-a79dcf916709: File exists 00:08:56.951 [2024-11-15 10:37:27.401763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:56.951 [2024-11-15 10:37:27.401783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:56.951 [2024-11-15 10:37:27.402098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:56.951 [2024-11-15 10:37:27.402466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:56.951 [2024-11-15 10:37:27.402490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:56.951 [2024-11-15 10:37:27.402684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:56.951 [2024-11-15 10:37:27.416299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60386 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60386 ']' 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60386 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60386 00:08:56.951 killing process with pid 60386 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60386' 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60386 00:08:56.951 [2024-11-15 10:37:27.493650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.951 10:37:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60386 00:08:56.951 [2024-11-15 10:37:27.493754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.951 [2024-11-15 10:37:27.493831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.951 [2024-11-15 10:37:27.493846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:58.323 [2024-11-15 10:37:28.700620] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.258 ************************************ 00:08:59.258 END TEST raid1_resize_superblock_test 00:08:59.258 ************************************ 00:08:59.258 10:37:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:59.258 00:08:59.258 real 0m4.367s 00:08:59.258 user 0m4.809s 00:08:59.258 sys 0m0.500s 00:08:59.258 10:37:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.258 10:37:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.258 10:37:29 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:59.258 10:37:29 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:59.258 10:37:29 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:59.258 10:37:29 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:59.258 10:37:29 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:59.258 10:37:29 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:59.258 
10:37:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:59.258 10:37:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.258 10:37:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.258 ************************************ 00:08:59.258 START TEST raid_function_test_raid0 00:08:59.258 ************************************ 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:59.258 Process raid pid: 60483 00:08:59.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60483 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60483' 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60483 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60483 ']' 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:59.258 10:37:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:59.516 [2024-11-15 10:37:29.886010] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:08:59.516 [2024-11-15 10:37:29.886429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.773 [2024-11-15 10:37:30.086532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.773 [2024-11-15 10:37:30.227684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.030 [2024-11-15 10:37:30.455283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.030 [2024-11-15 10:37:30.455677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.287 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:00.287 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:09:00.287 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:00.287 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.287 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:00.545 Base_1 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.545 
10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:00.545 Base_2 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:00.545 [2024-11-15 10:37:30.913220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:00.545 [2024-11-15 10:37:30.915684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:00.545 [2024-11-15 10:37:30.915917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:00.545 [2024-11-15 10:37:30.916066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:00.545 [2024-11-15 10:37:30.916441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:00.545 [2024-11-15 10:37:30.916634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:00.545 [2024-11-15 10:37:30.916651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:00.545 [2024-11-15 10:37:30.916844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:00.545 10:37:30 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:00.545 10:37:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:00.803 [2024-11-15 10:37:31.341418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:01.060 /dev/nbd0 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.060 1+0 records in 00:09:01.060 1+0 records out 00:09:01.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732741 s, 5.6 MB/s 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:01.060 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:01.318 { 00:09:01.318 "nbd_device": "/dev/nbd0", 00:09:01.318 "bdev_name": "raid" 00:09:01.318 } 00:09:01.318 ]' 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:01.318 { 00:09:01.318 "nbd_device": "/dev/nbd0", 00:09:01.318 "bdev_name": "raid" 00:09:01.318 } 00:09:01.318 ]' 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:01.318 4096+0 records in 00:09:01.318 4096+0 records out 00:09:01.318 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0297727 s, 70.4 MB/s 00:09:01.318 10:37:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:01.575 4096+0 records in 00:09:01.575 4096+0 records out 00:09:01.575 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.343524 s, 6.1 MB/s 00:09:01.575 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:01.832 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:01.833 128+0 records in 00:09:01.833 128+0 records out 00:09:01.833 65536 bytes (66 kB, 64 KiB) copied, 0.000602446 s, 109 MB/s 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:01.833 2035+0 records in 00:09:01.833 2035+0 records out 00:09:01.833 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0104503 s, 99.7 MB/s 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:01.833 456+0 records in 00:09:01.833 456+0 records out 00:09:01.833 233472 bytes (233 kB, 228 KiB) copied, 0.00298009 s, 78.3 MB/s 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.833 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:02.090 [2024-11-15 10:37:32.538406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:02.090 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:09:02.348 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:02.348 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.348 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60483 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60483 ']' 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60483 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60483 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:02.610 killing process with pid 60483 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60483' 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60483 00:09:02.610 [2024-11-15 10:37:32.966056] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.610 10:37:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60483 00:09:02.610 [2024-11-15 10:37:32.966177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.610 [2024-11-15 10:37:32.966242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.610 [2024-11-15 10:37:32.966265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:02.610 [2024-11-15 10:37:33.142318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.007 ************************************ 00:09:04.007 END TEST raid_function_test_raid0 00:09:04.007 ************************************ 00:09:04.007 10:37:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:09:04.007 00:09:04.007 real 0m4.351s 00:09:04.007 user 0m5.445s 00:09:04.007 sys 0m0.967s 00:09:04.007 10:37:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:04.007 10:37:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:04.007 10:37:34 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:09:04.007 10:37:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:04.007 10:37:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:04.007 10:37:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.007 
************************************ 00:09:04.007 START TEST raid_function_test_concat 00:09:04.007 ************************************ 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60612 00:09:04.007 Process raid pid: 60612 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60612' 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60612 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60612 ']' 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:04.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:04.007 10:37:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:04.007 [2024-11-15 10:37:34.314876] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:09:04.007 [2024-11-15 10:37:34.315082] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.007 [2024-11-15 10:37:34.496714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.265 [2024-11-15 10:37:34.601323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.265 [2024-11-15 10:37:34.786758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.265 [2024-11-15 10:37:34.786811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.831 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:04.831 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:09:04.831 10:37:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:04.831 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.831 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:05.091 Base_1 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:05.091 Base_2 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:05.091 [2024-11-15 10:37:35.470045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:05.091 [2024-11-15 10:37:35.472906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:05.091 [2024-11-15 10:37:35.473049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:05.091 [2024-11-15 10:37:35.473083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:05.091 [2024-11-15 10:37:35.473549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:05.091 [2024-11-15 10:37:35.473771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:05.091 [2024-11-15 10:37:35.473796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:05.091 [2024-11-15 10:37:35.474050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.091 10:37:35 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:05.091 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:05.350 [2024-11-15 10:37:35.754438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:05.350 /dev/nbd0 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.350 1+0 records in 00:09:05.350 1+0 records out 00:09:05.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459872 s, 8.9 MB/s 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.350 
10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:05.350 10:37:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:05.609 { 00:09:05.609 "nbd_device": "/dev/nbd0", 00:09:05.609 "bdev_name": "raid" 00:09:05.609 } 00:09:05.609 ]' 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:05.609 { 00:09:05.609 "nbd_device": "/dev/nbd0", 00:09:05.609 "bdev_name": "raid" 00:09:05.609 } 00:09:05.609 ]' 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:05.609 
10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:05.609 4096+0 records in 00:09:05.609 4096+0 records out 00:09:05.609 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0313192 s, 67.0 MB/s 00:09:05.609 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:06.175 4096+0 records in 00:09:06.175 4096+0 
records out 00:09:06.175 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.326552 s, 6.4 MB/s 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:06.175 128+0 records in 00:09:06.175 128+0 records out 00:09:06.175 65536 bytes (66 kB, 64 KiB) copied, 0.000461069 s, 142 MB/s 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:09:06.175 2035+0 records in 00:09:06.175 2035+0 records out 00:09:06.175 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00683447 s, 152 MB/s 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:06.175 456+0 records in 00:09:06.175 456+0 records out 00:09:06.175 233472 bytes (233 kB, 228 KiB) copied, 0.00145347 s, 161 MB/s 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.175 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.433 [2024-11-15 10:37:36.839329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:06.433 10:37:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:06.433 10:37:36 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60612 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60612 ']' 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60612 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60612 00:09:06.692 killing process with pid 60612 00:09:06.692 10:37:37 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60612' 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60612 00:09:06.692 [2024-11-15 10:37:37.207094] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.692 10:37:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60612 00:09:06.692 [2024-11-15 10:37:37.207234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.692 [2024-11-15 10:37:37.207305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.692 [2024-11-15 10:37:37.207324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:06.949 [2024-11-15 10:37:37.381745] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.884 10:37:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:09:07.884 00:09:07.884 real 0m4.186s 00:09:07.884 user 0m5.235s 00:09:07.884 sys 0m0.884s 00:09:07.884 10:37:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:07.884 10:37:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:07.884 ************************************ 00:09:07.884 END TEST raid_function_test_concat 00:09:07.884 ************************************ 00:09:07.884 10:37:38 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:09:07.884 10:37:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:07.884 10:37:38 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:07.884 10:37:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.884 ************************************ 00:09:07.884 START TEST raid0_resize_test 00:09:07.884 ************************************ 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60746 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60746' 00:09:07.884 Process raid pid: 60746 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60746 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60746 ']' 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:09:07.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:07.884 10:37:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.142 [2024-11-15 10:37:38.514539] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:09:08.142 [2024-11-15 10:37:38.514685] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.142 [2024-11-15 10:37:38.686692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.400 [2024-11-15 10:37:38.791519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.658 [2024-11-15 10:37:38.975472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.658 [2024-11-15 10:37:38.975530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.222 Base_1 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.222 
10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.222 Base_2 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.222 [2024-11-15 10:37:39.568089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:09.222 [2024-11-15 10:37:39.570323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:09.222 [2024-11-15 10:37:39.570416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:09.222 [2024-11-15 10:37:39.570438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:09.222 [2024-11-15 10:37:39.570751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:09.222 [2024-11-15 10:37:39.570915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:09.222 [2024-11-15 10:37:39.570931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:09.222 [2024-11-15 10:37:39.571098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.222 
10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.222 [2024-11-15 10:37:39.576073] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:09.222 [2024-11-15 10:37:39.576112] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:09.222 true 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.222 [2024-11-15 10:37:39.588278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.222 [2024-11-15 10:37:39.660128] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:09.222 [2024-11-15 10:37:39.660165] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:09.222 [2024-11-15 10:37:39.660209] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:09.222 true 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.222 [2024-11-15 10:37:39.672341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60746 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@952 -- # '[' -z 60746 ']' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60746 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60746 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:09.222 killing process with pid 60746 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60746' 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60746 00:09:09.222 [2024-11-15 10:37:39.745739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.222 10:37:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60746 00:09:09.222 [2024-11-15 10:37:39.745853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.222 [2024-11-15 10:37:39.745919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.222 [2024-11-15 10:37:39.745934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:09.222 [2024-11-15 10:37:39.760823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.176 10:37:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:10.176 00:09:10.176 real 0m2.304s 00:09:10.176 user 0m2.703s 00:09:10.176 sys 0m0.281s 00:09:10.176 ************************************ 00:09:10.176 END TEST raid0_resize_test 00:09:10.176 
************************************ 00:09:10.176 10:37:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.176 10:37:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.434 10:37:40 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:09:10.434 10:37:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:10.434 10:37:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.434 10:37:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.434 ************************************ 00:09:10.434 START TEST raid1_resize_test 00:09:10.434 ************************************ 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60807 00:09:10.434 Process raid pid: 60807 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60807' 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60807 00:09:10.434 
10:37:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60807 ']' 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:10.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:10.434 10:37:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.434 [2024-11-15 10:37:40.866543] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:09:10.434 [2024-11-15 10:37:40.866691] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.692 [2024-11-15 10:37:41.046957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.692 [2024-11-15 10:37:41.174972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.950 [2024-11-15 10:37:41.378011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.950 [2024-11-15 10:37:41.378055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.516 Base_1 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.516 Base_2 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.516 [2024-11-15 10:37:41.862637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:11.516 [2024-11-15 10:37:41.864859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:11.516 [2024-11-15 10:37:41.864947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:11.516 [2024-11-15 10:37:41.864968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:11.516 [2024-11-15 10:37:41.865295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:11.516 [2024-11-15 10:37:41.865472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:11.516 [2024-11-15 10:37:41.865489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:11.516 [2024-11-15 10:37:41.865669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.516 [2024-11-15 10:37:41.870630] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:11.516 [2024-11-15 10:37:41.870671] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:11.516 true 00:09:11.516 
10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:11.516 [2024-11-15 10:37:41.882843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.516 [2024-11-15 10:37:41.942685] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:11.516 [2024-11-15 10:37:41.942725] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:11.516 [2024-11-15 10:37:41.942765] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:09:11.516 true 00:09:11.516 10:37:41 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:11.516 [2024-11-15 10:37:41.954870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.516 10:37:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60807 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60807 ']' 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60807 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60807 00:09:11.516 10:37:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:11.516 killing process with pid 60807 00:09:11.517 10:37:42 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:11.517 10:37:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60807' 00:09:11.517 10:37:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60807 00:09:11.517 [2024-11-15 10:37:42.057610] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.517 10:37:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60807 00:09:11.517 [2024-11-15 10:37:42.057714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.517 [2024-11-15 10:37:42.058283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.517 [2024-11-15 10:37:42.058321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:11.517 [2024-11-15 10:37:42.072982] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.890 10:37:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:12.890 00:09:12.890 real 0m2.288s 00:09:12.890 user 0m2.642s 00:09:12.890 sys 0m0.289s 00:09:12.890 ************************************ 00:09:12.890 END TEST raid1_resize_test 00:09:12.890 ************************************ 00:09:12.890 10:37:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:12.890 10:37:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.890 10:37:43 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:12.890 10:37:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:12.890 10:37:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:12.890 10:37:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:12.890 10:37:43 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:12.890 10:37:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.890 ************************************ 00:09:12.890 START TEST raid_state_function_test 00:09:12.890 ************************************ 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:12.890 Process raid pid: 60864 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60864 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60864' 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60864 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60864 ']' 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:12.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:12.890 10:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.890 [2024-11-15 10:37:43.220391] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:09:12.890 [2024-11-15 10:37:43.221314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.890 [2024-11-15 10:37:43.408378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.148 [2024-11-15 10:37:43.540179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.406 [2024-11-15 10:37:43.776601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.406 [2024-11-15 10:37:43.776897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.971 [2024-11-15 10:37:44.241678] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.971 
[2024-11-15 10:37:44.241907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.971 [2024-11-15 10:37:44.242096] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.971 [2024-11-15 10:37:44.242192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.971 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.971 "name": "Existed_Raid", 00:09:13.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.971 "strip_size_kb": 64, 00:09:13.971 "state": "configuring", 00:09:13.971 "raid_level": "raid0", 00:09:13.971 "superblock": false, 00:09:13.971 "num_base_bdevs": 2, 00:09:13.971 "num_base_bdevs_discovered": 0, 00:09:13.971 "num_base_bdevs_operational": 2, 00:09:13.971 "base_bdevs_list": [ 00:09:13.971 { 00:09:13.971 "name": "BaseBdev1", 00:09:13.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.971 "is_configured": false, 00:09:13.971 "data_offset": 0, 00:09:13.971 "data_size": 0 00:09:13.972 }, 00:09:13.972 { 00:09:13.972 "name": "BaseBdev2", 00:09:13.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.972 "is_configured": false, 00:09:13.972 "data_offset": 0, 00:09:13.972 "data_size": 0 00:09:13.972 } 00:09:13.972 ] 00:09:13.972 }' 00:09:13.972 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.972 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.229 [2024-11-15 10:37:44.769782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.229 [2024-11-15 10:37:44.769831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.229 [2024-11-15 10:37:44.777741] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.229 [2024-11-15 10:37:44.777813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.229 [2024-11-15 10:37:44.777832] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.229 [2024-11-15 10:37:44.777852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.229 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.488 [2024-11-15 10:37:44.821767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.488 BaseBdev1 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:14.488 10:37:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.488 [ 00:09:14.488 { 00:09:14.488 "name": "BaseBdev1", 00:09:14.488 "aliases": [ 00:09:14.488 "08616a59-8cec-4973-9e9e-1cffd5f7d351" 00:09:14.488 ], 00:09:14.488 "product_name": "Malloc disk", 00:09:14.488 "block_size": 512, 00:09:14.488 "num_blocks": 65536, 00:09:14.488 "uuid": "08616a59-8cec-4973-9e9e-1cffd5f7d351", 00:09:14.488 "assigned_rate_limits": { 00:09:14.488 "rw_ios_per_sec": 0, 00:09:14.488 "rw_mbytes_per_sec": 0, 00:09:14.488 "r_mbytes_per_sec": 0, 00:09:14.488 "w_mbytes_per_sec": 0 00:09:14.488 }, 00:09:14.488 "claimed": true, 00:09:14.488 "claim_type": "exclusive_write", 00:09:14.488 "zoned": false, 00:09:14.488 "supported_io_types": { 00:09:14.488 "read": true, 00:09:14.488 "write": true, 00:09:14.488 "unmap": true, 00:09:14.488 "flush": true, 
00:09:14.488 "reset": true, 00:09:14.488 "nvme_admin": false, 00:09:14.488 "nvme_io": false, 00:09:14.488 "nvme_io_md": false, 00:09:14.488 "write_zeroes": true, 00:09:14.488 "zcopy": true, 00:09:14.488 "get_zone_info": false, 00:09:14.488 "zone_management": false, 00:09:14.488 "zone_append": false, 00:09:14.488 "compare": false, 00:09:14.488 "compare_and_write": false, 00:09:14.488 "abort": true, 00:09:14.488 "seek_hole": false, 00:09:14.488 "seek_data": false, 00:09:14.488 "copy": true, 00:09:14.488 "nvme_iov_md": false 00:09:14.488 }, 00:09:14.488 "memory_domains": [ 00:09:14.488 { 00:09:14.488 "dma_device_id": "system", 00:09:14.488 "dma_device_type": 1 00:09:14.488 }, 00:09:14.488 { 00:09:14.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.488 "dma_device_type": 2 00:09:14.488 } 00:09:14.488 ], 00:09:14.488 "driver_specific": {} 00:09:14.488 } 00:09:14.488 ] 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.488 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.488 "name": "Existed_Raid", 00:09:14.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.488 "strip_size_kb": 64, 00:09:14.488 "state": "configuring", 00:09:14.488 "raid_level": "raid0", 00:09:14.488 "superblock": false, 00:09:14.488 "num_base_bdevs": 2, 00:09:14.488 "num_base_bdevs_discovered": 1, 00:09:14.488 "num_base_bdevs_operational": 2, 00:09:14.488 "base_bdevs_list": [ 00:09:14.488 { 00:09:14.488 "name": "BaseBdev1", 00:09:14.488 "uuid": "08616a59-8cec-4973-9e9e-1cffd5f7d351", 00:09:14.488 "is_configured": true, 00:09:14.488 "data_offset": 0, 00:09:14.488 "data_size": 65536 00:09:14.488 }, 00:09:14.489 { 00:09:14.489 "name": "BaseBdev2", 00:09:14.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.489 "is_configured": false, 00:09:14.489 "data_offset": 0, 00:09:14.489 "data_size": 0 00:09:14.489 } 00:09:14.489 ] 00:09:14.489 }' 00:09:14.489 10:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.489 10:37:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.054 [2024-11-15 10:37:45.377947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.054 [2024-11-15 10:37:45.378019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.054 [2024-11-15 10:37:45.385983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.054 [2024-11-15 10:37:45.388263] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.054 [2024-11-15 10:37:45.388474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.054 "name": "Existed_Raid", 00:09:15.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.054 "strip_size_kb": 64, 00:09:15.054 "state": "configuring", 00:09:15.054 "raid_level": "raid0", 00:09:15.054 "superblock": false, 00:09:15.054 "num_base_bdevs": 2, 00:09:15.054 
"num_base_bdevs_discovered": 1, 00:09:15.054 "num_base_bdevs_operational": 2, 00:09:15.054 "base_bdevs_list": [ 00:09:15.054 { 00:09:15.054 "name": "BaseBdev1", 00:09:15.054 "uuid": "08616a59-8cec-4973-9e9e-1cffd5f7d351", 00:09:15.054 "is_configured": true, 00:09:15.054 "data_offset": 0, 00:09:15.054 "data_size": 65536 00:09:15.054 }, 00:09:15.054 { 00:09:15.054 "name": "BaseBdev2", 00:09:15.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.054 "is_configured": false, 00:09:15.054 "data_offset": 0, 00:09:15.054 "data_size": 0 00:09:15.054 } 00:09:15.054 ] 00:09:15.054 }' 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.054 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.639 [2024-11-15 10:37:45.908523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.639 [2024-11-15 10:37:45.908587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:15.639 [2024-11-15 10:37:45.908602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:15.639 [2024-11-15 10:37:45.908920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:15.639 [2024-11-15 10:37:45.909118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.639 [2024-11-15 10:37:45.909142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:15.639 [2024-11-15 10:37:45.909485] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.639 BaseBdev2 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.639 [ 00:09:15.639 { 00:09:15.639 "name": "BaseBdev2", 00:09:15.639 "aliases": [ 00:09:15.639 "0f1a2427-295d-4965-b4d0-5cf2fc85829c" 00:09:15.639 ], 00:09:15.639 "product_name": "Malloc disk", 00:09:15.639 "block_size": 512, 00:09:15.639 "num_blocks": 65536, 00:09:15.639 "uuid": "0f1a2427-295d-4965-b4d0-5cf2fc85829c", 00:09:15.639 
"assigned_rate_limits": { 00:09:15.639 "rw_ios_per_sec": 0, 00:09:15.639 "rw_mbytes_per_sec": 0, 00:09:15.639 "r_mbytes_per_sec": 0, 00:09:15.639 "w_mbytes_per_sec": 0 00:09:15.639 }, 00:09:15.639 "claimed": true, 00:09:15.639 "claim_type": "exclusive_write", 00:09:15.639 "zoned": false, 00:09:15.639 "supported_io_types": { 00:09:15.639 "read": true, 00:09:15.639 "write": true, 00:09:15.639 "unmap": true, 00:09:15.639 "flush": true, 00:09:15.639 "reset": true, 00:09:15.639 "nvme_admin": false, 00:09:15.639 "nvme_io": false, 00:09:15.639 "nvme_io_md": false, 00:09:15.639 "write_zeroes": true, 00:09:15.639 "zcopy": true, 00:09:15.639 "get_zone_info": false, 00:09:15.639 "zone_management": false, 00:09:15.639 "zone_append": false, 00:09:15.639 "compare": false, 00:09:15.639 "compare_and_write": false, 00:09:15.639 "abort": true, 00:09:15.639 "seek_hole": false, 00:09:15.639 "seek_data": false, 00:09:15.639 "copy": true, 00:09:15.639 "nvme_iov_md": false 00:09:15.639 }, 00:09:15.639 "memory_domains": [ 00:09:15.639 { 00:09:15.639 "dma_device_id": "system", 00:09:15.639 "dma_device_type": 1 00:09:15.639 }, 00:09:15.639 { 00:09:15.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.639 "dma_device_type": 2 00:09:15.639 } 00:09:15.639 ], 00:09:15.639 "driver_specific": {} 00:09:15.639 } 00:09:15.639 ] 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.639 "name": "Existed_Raid", 00:09:15.639 "uuid": "0c1fa1c8-73b3-49df-a1d0-9755c31d20a1", 00:09:15.639 "strip_size_kb": 64, 00:09:15.639 "state": "online", 00:09:15.639 "raid_level": "raid0", 00:09:15.639 "superblock": false, 00:09:15.639 "num_base_bdevs": 2, 00:09:15.639 "num_base_bdevs_discovered": 2, 00:09:15.639 "num_base_bdevs_operational": 2, 00:09:15.639 "base_bdevs_list": [ 00:09:15.639 { 
00:09:15.639 "name": "BaseBdev1", 00:09:15.639 "uuid": "08616a59-8cec-4973-9e9e-1cffd5f7d351", 00:09:15.639 "is_configured": true, 00:09:15.639 "data_offset": 0, 00:09:15.639 "data_size": 65536 00:09:15.639 }, 00:09:15.639 { 00:09:15.639 "name": "BaseBdev2", 00:09:15.639 "uuid": "0f1a2427-295d-4965-b4d0-5cf2fc85829c", 00:09:15.639 "is_configured": true, 00:09:15.639 "data_offset": 0, 00:09:15.639 "data_size": 65536 00:09:15.639 } 00:09:15.639 ] 00:09:15.639 }' 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.639 10:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.897 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:15.897 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:15.897 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.897 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.897 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.897 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.156 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.156 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.156 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.156 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.156 [2024-11-15 10:37:46.461054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.156 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:16.156 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.156 "name": "Existed_Raid", 00:09:16.156 "aliases": [ 00:09:16.156 "0c1fa1c8-73b3-49df-a1d0-9755c31d20a1" 00:09:16.156 ], 00:09:16.156 "product_name": "Raid Volume", 00:09:16.156 "block_size": 512, 00:09:16.156 "num_blocks": 131072, 00:09:16.156 "uuid": "0c1fa1c8-73b3-49df-a1d0-9755c31d20a1", 00:09:16.156 "assigned_rate_limits": { 00:09:16.156 "rw_ios_per_sec": 0, 00:09:16.156 "rw_mbytes_per_sec": 0, 00:09:16.156 "r_mbytes_per_sec": 0, 00:09:16.156 "w_mbytes_per_sec": 0 00:09:16.156 }, 00:09:16.156 "claimed": false, 00:09:16.156 "zoned": false, 00:09:16.156 "supported_io_types": { 00:09:16.156 "read": true, 00:09:16.156 "write": true, 00:09:16.156 "unmap": true, 00:09:16.156 "flush": true, 00:09:16.156 "reset": true, 00:09:16.156 "nvme_admin": false, 00:09:16.156 "nvme_io": false, 00:09:16.156 "nvme_io_md": false, 00:09:16.156 "write_zeroes": true, 00:09:16.156 "zcopy": false, 00:09:16.156 "get_zone_info": false, 00:09:16.156 "zone_management": false, 00:09:16.156 "zone_append": false, 00:09:16.156 "compare": false, 00:09:16.156 "compare_and_write": false, 00:09:16.156 "abort": false, 00:09:16.156 "seek_hole": false, 00:09:16.156 "seek_data": false, 00:09:16.156 "copy": false, 00:09:16.156 "nvme_iov_md": false 00:09:16.156 }, 00:09:16.156 "memory_domains": [ 00:09:16.156 { 00:09:16.156 "dma_device_id": "system", 00:09:16.156 "dma_device_type": 1 00:09:16.156 }, 00:09:16.156 { 00:09:16.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.156 "dma_device_type": 2 00:09:16.156 }, 00:09:16.156 { 00:09:16.156 "dma_device_id": "system", 00:09:16.156 "dma_device_type": 1 00:09:16.156 }, 00:09:16.156 { 00:09:16.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.156 "dma_device_type": 2 00:09:16.156 } 00:09:16.156 ], 00:09:16.156 "driver_specific": { 00:09:16.156 "raid": { 00:09:16.156 "uuid": "0c1fa1c8-73b3-49df-a1d0-9755c31d20a1", 
00:09:16.156 "strip_size_kb": 64, 00:09:16.156 "state": "online", 00:09:16.156 "raid_level": "raid0", 00:09:16.156 "superblock": false, 00:09:16.156 "num_base_bdevs": 2, 00:09:16.156 "num_base_bdevs_discovered": 2, 00:09:16.156 "num_base_bdevs_operational": 2, 00:09:16.156 "base_bdevs_list": [ 00:09:16.156 { 00:09:16.156 "name": "BaseBdev1", 00:09:16.156 "uuid": "08616a59-8cec-4973-9e9e-1cffd5f7d351", 00:09:16.156 "is_configured": true, 00:09:16.156 "data_offset": 0, 00:09:16.156 "data_size": 65536 00:09:16.156 }, 00:09:16.156 { 00:09:16.156 "name": "BaseBdev2", 00:09:16.156 "uuid": "0f1a2427-295d-4965-b4d0-5cf2fc85829c", 00:09:16.156 "is_configured": true, 00:09:16.156 "data_offset": 0, 00:09:16.156 "data_size": 65536 00:09:16.156 } 00:09:16.156 ] 00:09:16.156 } 00:09:16.156 } 00:09:16.156 }' 00:09:16.156 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.156 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.156 BaseBdev2' 00:09:16.156 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.157 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.415 [2024-11-15 10:37:46.744840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.415 [2024-11-15 10:37:46.744883] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.415 [2024-11-15 10:37:46.744949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.415 10:37:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.415 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.415 "name": "Existed_Raid", 00:09:16.415 "uuid": "0c1fa1c8-73b3-49df-a1d0-9755c31d20a1", 00:09:16.415 "strip_size_kb": 64, 00:09:16.415 "state": "offline", 00:09:16.415 "raid_level": "raid0", 00:09:16.415 "superblock": false, 00:09:16.415 "num_base_bdevs": 2, 00:09:16.415 "num_base_bdevs_discovered": 1, 00:09:16.415 "num_base_bdevs_operational": 1, 00:09:16.415 "base_bdevs_list": [ 00:09:16.415 { 00:09:16.415 "name": null, 00:09:16.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.415 "is_configured": false, 00:09:16.416 "data_offset": 0, 00:09:16.416 "data_size": 65536 00:09:16.416 }, 00:09:16.416 { 00:09:16.416 "name": "BaseBdev2", 00:09:16.416 "uuid": "0f1a2427-295d-4965-b4d0-5cf2fc85829c", 00:09:16.416 "is_configured": true, 00:09:16.416 "data_offset": 0, 00:09:16.416 "data_size": 65536 00:09:16.416 } 00:09:16.416 ] 00:09:16.416 }' 00:09:16.416 10:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.416 10:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:16.983 10:37:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.983 [2024-11-15 10:37:47.404972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:16.983 [2024-11-15 10:37:47.405044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.983 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:09:17.241 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.241 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.241 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:17.241 10:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60864 00:09:17.241 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60864 ']' 00:09:17.241 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60864 00:09:17.241 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:17.241 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:17.241 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60864 00:09:17.242 killing process with pid 60864 00:09:17.242 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:17.242 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:17.242 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60864' 00:09:17.242 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60864 00:09:17.242 [2024-11-15 10:37:47.579436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.242 10:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60864 00:09:17.242 [2024-11-15 10:37:47.594002] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.179 00:09:18.179 real 0m5.481s 00:09:18.179 user 0m8.425s 00:09:18.179 sys 
0m0.685s 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.179 ************************************ 00:09:18.179 END TEST raid_state_function_test 00:09:18.179 ************************************ 00:09:18.179 10:37:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:18.179 10:37:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:18.179 10:37:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:18.179 10:37:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.179 ************************************ 00:09:18.179 START TEST raid_state_function_test_sb 00:09:18.179 ************************************ 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61123 00:09:18.179 Process raid pid: 61123 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61123' 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61123 00:09:18.179 
10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61123 ']' 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:18.179 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.437 [2024-11-15 10:37:48.747813] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:09:18.437 [2024-11-15 10:37:48.747988] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.437 [2024-11-15 10:37:48.931566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.697 [2024-11-15 10:37:49.051869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.697 [2024-11-15 10:37:49.240110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.697 [2024-11-15 10:37:49.240151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.264 [2024-11-15 10:37:49.796594] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.264 [2024-11-15 10:37:49.796667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.264 [2024-11-15 10:37:49.796685] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.264 [2024-11-15 10:37:49.796702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.264 
10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.264 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.523 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.523 "name": "Existed_Raid", 00:09:19.523 "uuid": "a95800bf-26a6-46f4-9153-6dd210c32760", 00:09:19.523 "strip_size_kb": 
64, 00:09:19.523 "state": "configuring", 00:09:19.523 "raid_level": "raid0", 00:09:19.523 "superblock": true, 00:09:19.523 "num_base_bdevs": 2, 00:09:19.523 "num_base_bdevs_discovered": 0, 00:09:19.523 "num_base_bdevs_operational": 2, 00:09:19.523 "base_bdevs_list": [ 00:09:19.523 { 00:09:19.523 "name": "BaseBdev1", 00:09:19.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.523 "is_configured": false, 00:09:19.523 "data_offset": 0, 00:09:19.523 "data_size": 0 00:09:19.523 }, 00:09:19.523 { 00:09:19.523 "name": "BaseBdev2", 00:09:19.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.523 "is_configured": false, 00:09:19.523 "data_offset": 0, 00:09:19.523 "data_size": 0 00:09:19.523 } 00:09:19.523 ] 00:09:19.523 }' 00:09:19.523 10:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.523 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.780 [2024-11-15 10:37:50.320658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.780 [2024-11-15 10:37:50.320847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.780 10:37:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.780 [2024-11-15 10:37:50.328696] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.780 [2024-11-15 10:37:50.328932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.780 [2024-11-15 10:37:50.329124] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.780 [2024-11-15 10:37:50.329254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.780 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.040 [2024-11-15 10:37:50.375861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.040 BaseBdev1 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.040 [ 00:09:20.040 { 00:09:20.040 "name": "BaseBdev1", 00:09:20.040 "aliases": [ 00:09:20.040 "873dc0d3-6310-40ec-b3a2-805ed59b03dd" 00:09:20.040 ], 00:09:20.040 "product_name": "Malloc disk", 00:09:20.040 "block_size": 512, 00:09:20.040 "num_blocks": 65536, 00:09:20.040 "uuid": "873dc0d3-6310-40ec-b3a2-805ed59b03dd", 00:09:20.040 "assigned_rate_limits": { 00:09:20.040 "rw_ios_per_sec": 0, 00:09:20.040 "rw_mbytes_per_sec": 0, 00:09:20.040 "r_mbytes_per_sec": 0, 00:09:20.040 "w_mbytes_per_sec": 0 00:09:20.040 }, 00:09:20.040 "claimed": true, 00:09:20.040 "claim_type": "exclusive_write", 00:09:20.040 "zoned": false, 00:09:20.040 "supported_io_types": { 00:09:20.040 "read": true, 00:09:20.040 "write": true, 00:09:20.040 "unmap": true, 00:09:20.040 "flush": true, 00:09:20.040 "reset": true, 00:09:20.040 "nvme_admin": false, 00:09:20.040 "nvme_io": false, 00:09:20.040 "nvme_io_md": false, 00:09:20.040 "write_zeroes": true, 00:09:20.040 "zcopy": true, 00:09:20.040 "get_zone_info": false, 00:09:20.040 "zone_management": false, 00:09:20.040 "zone_append": false, 00:09:20.040 "compare": false, 00:09:20.040 "compare_and_write": false, 00:09:20.040 
"abort": true, 00:09:20.040 "seek_hole": false, 00:09:20.040 "seek_data": false, 00:09:20.040 "copy": true, 00:09:20.040 "nvme_iov_md": false 00:09:20.040 }, 00:09:20.040 "memory_domains": [ 00:09:20.040 { 00:09:20.040 "dma_device_id": "system", 00:09:20.040 "dma_device_type": 1 00:09:20.040 }, 00:09:20.040 { 00:09:20.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.040 "dma_device_type": 2 00:09:20.040 } 00:09:20.040 ], 00:09:20.040 "driver_specific": {} 00:09:20.040 } 00:09:20.040 ] 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.040 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.040 "name": "Existed_Raid", 00:09:20.040 "uuid": "faea0236-3926-428e-927c-b2fab4356151", 00:09:20.040 "strip_size_kb": 64, 00:09:20.040 "state": "configuring", 00:09:20.040 "raid_level": "raid0", 00:09:20.040 "superblock": true, 00:09:20.040 "num_base_bdevs": 2, 00:09:20.040 "num_base_bdevs_discovered": 1, 00:09:20.040 "num_base_bdevs_operational": 2, 00:09:20.040 "base_bdevs_list": [ 00:09:20.040 { 00:09:20.040 "name": "BaseBdev1", 00:09:20.040 "uuid": "873dc0d3-6310-40ec-b3a2-805ed59b03dd", 00:09:20.040 "is_configured": true, 00:09:20.040 "data_offset": 2048, 00:09:20.041 "data_size": 63488 00:09:20.041 }, 00:09:20.041 { 00:09:20.041 "name": "BaseBdev2", 00:09:20.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.041 "is_configured": false, 00:09:20.041 "data_offset": 0, 00:09:20.041 "data_size": 0 00:09:20.041 } 00:09:20.041 ] 00:09:20.041 }' 00:09:20.041 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.041 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.607 [2024-11-15 10:37:50.944053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.607 [2024-11-15 10:37:50.944115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.607 [2024-11-15 10:37:50.952103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.607 [2024-11-15 10:37:50.954572] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.607 [2024-11-15 10:37:50.954756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.607 10:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.607 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.607 "name": "Existed_Raid", 00:09:20.607 "uuid": "7dd3527e-483b-44e7-9295-173d60e14957", 00:09:20.607 "strip_size_kb": 64, 00:09:20.607 "state": "configuring", 00:09:20.607 "raid_level": "raid0", 00:09:20.607 "superblock": true, 00:09:20.607 "num_base_bdevs": 2, 00:09:20.607 "num_base_bdevs_discovered": 1, 00:09:20.607 "num_base_bdevs_operational": 2, 00:09:20.608 "base_bdevs_list": [ 00:09:20.608 { 00:09:20.608 "name": "BaseBdev1", 00:09:20.608 "uuid": "873dc0d3-6310-40ec-b3a2-805ed59b03dd", 00:09:20.608 "is_configured": true, 00:09:20.608 "data_offset": 2048, 
00:09:20.608 "data_size": 63488 00:09:20.608 }, 00:09:20.608 { 00:09:20.608 "name": "BaseBdev2", 00:09:20.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.608 "is_configured": false, 00:09:20.608 "data_offset": 0, 00:09:20.608 "data_size": 0 00:09:20.608 } 00:09:20.608 ] 00:09:20.608 }' 00:09:20.608 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.608 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.174 [2024-11-15 10:37:51.551080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.174 [2024-11-15 10:37:51.551415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:21.174 [2024-11-15 10:37:51.551441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:21.174 BaseBdev2 00:09:21.174 [2024-11-15 10:37:51.551826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:21.174 [2024-11-15 10:37:51.552013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.174 [2024-11-15 10:37:51.552033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:21.174 [2024-11-15 10:37:51.552203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.174 [ 00:09:21.174 { 00:09:21.174 "name": "BaseBdev2", 00:09:21.174 "aliases": [ 00:09:21.174 "03b0e38e-a6f6-4933-a02f-6ec48b20a86b" 00:09:21.174 ], 00:09:21.174 "product_name": "Malloc disk", 00:09:21.174 "block_size": 512, 00:09:21.174 "num_blocks": 65536, 00:09:21.174 "uuid": "03b0e38e-a6f6-4933-a02f-6ec48b20a86b", 00:09:21.174 "assigned_rate_limits": { 00:09:21.174 "rw_ios_per_sec": 0, 00:09:21.174 "rw_mbytes_per_sec": 0, 00:09:21.174 "r_mbytes_per_sec": 0, 00:09:21.174 "w_mbytes_per_sec": 0 00:09:21.174 }, 00:09:21.174 "claimed": true, 00:09:21.174 "claim_type": 
"exclusive_write", 00:09:21.174 "zoned": false, 00:09:21.174 "supported_io_types": { 00:09:21.174 "read": true, 00:09:21.174 "write": true, 00:09:21.174 "unmap": true, 00:09:21.174 "flush": true, 00:09:21.174 "reset": true, 00:09:21.174 "nvme_admin": false, 00:09:21.174 "nvme_io": false, 00:09:21.174 "nvme_io_md": false, 00:09:21.174 "write_zeroes": true, 00:09:21.174 "zcopy": true, 00:09:21.174 "get_zone_info": false, 00:09:21.174 "zone_management": false, 00:09:21.174 "zone_append": false, 00:09:21.174 "compare": false, 00:09:21.174 "compare_and_write": false, 00:09:21.174 "abort": true, 00:09:21.174 "seek_hole": false, 00:09:21.174 "seek_data": false, 00:09:21.174 "copy": true, 00:09:21.174 "nvme_iov_md": false 00:09:21.174 }, 00:09:21.174 "memory_domains": [ 00:09:21.174 { 00:09:21.174 "dma_device_id": "system", 00:09:21.174 "dma_device_type": 1 00:09:21.174 }, 00:09:21.174 { 00:09:21.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.174 "dma_device_type": 2 00:09:21.174 } 00:09:21.174 ], 00:09:21.174 "driver_specific": {} 00:09:21.174 } 00:09:21.174 ] 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.174 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.175 "name": "Existed_Raid", 00:09:21.175 "uuid": "7dd3527e-483b-44e7-9295-173d60e14957", 00:09:21.175 "strip_size_kb": 64, 00:09:21.175 "state": "online", 00:09:21.175 "raid_level": "raid0", 00:09:21.175 "superblock": true, 00:09:21.175 "num_base_bdevs": 2, 00:09:21.175 "num_base_bdevs_discovered": 2, 00:09:21.175 "num_base_bdevs_operational": 2, 00:09:21.175 "base_bdevs_list": [ 00:09:21.175 { 00:09:21.175 "name": "BaseBdev1", 00:09:21.175 "uuid": "873dc0d3-6310-40ec-b3a2-805ed59b03dd", 00:09:21.175 "is_configured": true, 00:09:21.175 "data_offset": 2048, 00:09:21.175 "data_size": 63488 
00:09:21.175 }, 00:09:21.175 { 00:09:21.175 "name": "BaseBdev2", 00:09:21.175 "uuid": "03b0e38e-a6f6-4933-a02f-6ec48b20a86b", 00:09:21.175 "is_configured": true, 00:09:21.175 "data_offset": 2048, 00:09:21.175 "data_size": 63488 00:09:21.175 } 00:09:21.175 ] 00:09:21.175 }' 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.175 10:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.819 [2024-11-15 10:37:52.103652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.819 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.819 "name": 
"Existed_Raid", 00:09:21.819 "aliases": [ 00:09:21.819 "7dd3527e-483b-44e7-9295-173d60e14957" 00:09:21.819 ], 00:09:21.819 "product_name": "Raid Volume", 00:09:21.819 "block_size": 512, 00:09:21.819 "num_blocks": 126976, 00:09:21.819 "uuid": "7dd3527e-483b-44e7-9295-173d60e14957", 00:09:21.819 "assigned_rate_limits": { 00:09:21.819 "rw_ios_per_sec": 0, 00:09:21.819 "rw_mbytes_per_sec": 0, 00:09:21.819 "r_mbytes_per_sec": 0, 00:09:21.819 "w_mbytes_per_sec": 0 00:09:21.819 }, 00:09:21.819 "claimed": false, 00:09:21.819 "zoned": false, 00:09:21.819 "supported_io_types": { 00:09:21.819 "read": true, 00:09:21.819 "write": true, 00:09:21.819 "unmap": true, 00:09:21.819 "flush": true, 00:09:21.819 "reset": true, 00:09:21.819 "nvme_admin": false, 00:09:21.819 "nvme_io": false, 00:09:21.819 "nvme_io_md": false, 00:09:21.819 "write_zeroes": true, 00:09:21.819 "zcopy": false, 00:09:21.819 "get_zone_info": false, 00:09:21.819 "zone_management": false, 00:09:21.819 "zone_append": false, 00:09:21.819 "compare": false, 00:09:21.819 "compare_and_write": false, 00:09:21.819 "abort": false, 00:09:21.819 "seek_hole": false, 00:09:21.819 "seek_data": false, 00:09:21.819 "copy": false, 00:09:21.819 "nvme_iov_md": false 00:09:21.819 }, 00:09:21.819 "memory_domains": [ 00:09:21.819 { 00:09:21.819 "dma_device_id": "system", 00:09:21.819 "dma_device_type": 1 00:09:21.819 }, 00:09:21.819 { 00:09:21.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.819 "dma_device_type": 2 00:09:21.819 }, 00:09:21.820 { 00:09:21.820 "dma_device_id": "system", 00:09:21.820 "dma_device_type": 1 00:09:21.820 }, 00:09:21.820 { 00:09:21.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.820 "dma_device_type": 2 00:09:21.820 } 00:09:21.820 ], 00:09:21.820 "driver_specific": { 00:09:21.820 "raid": { 00:09:21.820 "uuid": "7dd3527e-483b-44e7-9295-173d60e14957", 00:09:21.820 "strip_size_kb": 64, 00:09:21.820 "state": "online", 00:09:21.820 "raid_level": "raid0", 00:09:21.820 "superblock": true, 00:09:21.820 
"num_base_bdevs": 2, 00:09:21.820 "num_base_bdevs_discovered": 2, 00:09:21.820 "num_base_bdevs_operational": 2, 00:09:21.820 "base_bdevs_list": [ 00:09:21.820 { 00:09:21.820 "name": "BaseBdev1", 00:09:21.820 "uuid": "873dc0d3-6310-40ec-b3a2-805ed59b03dd", 00:09:21.820 "is_configured": true, 00:09:21.820 "data_offset": 2048, 00:09:21.820 "data_size": 63488 00:09:21.820 }, 00:09:21.820 { 00:09:21.820 "name": "BaseBdev2", 00:09:21.820 "uuid": "03b0e38e-a6f6-4933-a02f-6ec48b20a86b", 00:09:21.820 "is_configured": true, 00:09:21.820 "data_offset": 2048, 00:09:21.820 "data_size": 63488 00:09:21.820 } 00:09:21.820 ] 00:09:21.820 } 00:09:21.820 } 00:09:21.820 }' 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:21.820 BaseBdev2' 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.820 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.079 [2024-11-15 10:37:52.403447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.079 [2024-11-15 10:37:52.403487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.079 [2024-11-15 10:37:52.403559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.079 10:37:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.079 "name": "Existed_Raid", 00:09:22.079 "uuid": "7dd3527e-483b-44e7-9295-173d60e14957", 00:09:22.079 "strip_size_kb": 64, 00:09:22.079 "state": "offline", 00:09:22.079 "raid_level": "raid0", 00:09:22.079 "superblock": true, 00:09:22.079 "num_base_bdevs": 2, 00:09:22.079 "num_base_bdevs_discovered": 1, 00:09:22.079 "num_base_bdevs_operational": 1, 00:09:22.079 "base_bdevs_list": [ 00:09:22.079 { 00:09:22.079 "name": null, 00:09:22.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.079 "is_configured": false, 00:09:22.079 "data_offset": 0, 00:09:22.079 "data_size": 63488 00:09:22.079 }, 00:09:22.079 { 00:09:22.079 "name": "BaseBdev2", 00:09:22.079 "uuid": "03b0e38e-a6f6-4933-a02f-6ec48b20a86b", 00:09:22.079 "is_configured": true, 00:09:22.079 "data_offset": 2048, 00:09:22.079 "data_size": 63488 00:09:22.079 } 00:09:22.079 ] 00:09:22.079 }' 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.079 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.647 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:22.647 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.647 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.647 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.647 10:37:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.647 10:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.647 10:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.647 [2024-11-15 10:37:53.032226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.647 [2024-11-15 10:37:53.032301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61123 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61123 ']' 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61123 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:22.647 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61123 00:09:22.648 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:22.648 killing process with pid 61123 00:09:22.648 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:22.648 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61123' 00:09:22.648 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61123 00:09:22.648 [2024-11-15 10:37:53.203904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.648 10:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61123 00:09:22.906 [2024-11-15 10:37:53.218233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.841 ************************************ 00:09:23.841 END TEST 
raid_state_function_test_sb 00:09:23.841 ************************************ 00:09:23.841 10:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:23.841 00:09:23.841 real 0m5.596s 00:09:23.841 user 0m8.657s 00:09:23.841 sys 0m0.638s 00:09:23.841 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:23.841 10:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.841 10:37:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:23.841 10:37:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:23.841 10:37:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:23.841 10:37:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.841 ************************************ 00:09:23.841 START TEST raid_superblock_test 00:09:23.841 ************************************ 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:23.841 10:37:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:23.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61375 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61375 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61375 ']' 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:23.841 10:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.841 [2024-11-15 10:37:54.387305] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:09:23.841 [2024-11-15 10:37:54.388316] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61375 ] 00:09:24.099 [2024-11-15 10:37:54.585939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.357 [2024-11-15 10:37:54.706053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.357 [2024-11-15 10:37:54.887150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.357 [2024-11-15 10:37:54.887411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.925 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.926 10:37:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.926 malloc1 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.926 [2024-11-15 10:37:55.406871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:24.926 [2024-11-15 10:37:55.407099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.926 [2024-11-15 10:37:55.407271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:24.926 [2024-11-15 10:37:55.407424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.926 [2024-11-15 10:37:55.410138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.926 [2024-11-15 10:37:55.410313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:24.926 pt1 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.926 10:37:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.926 malloc2 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.926 [2024-11-15 10:37:55.454549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.926 [2024-11-15 10:37:55.454620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.926 [2024-11-15 10:37:55.454656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:24.926 
[2024-11-15 10:37:55.454672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.926 [2024-11-15 10:37:55.457286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.926 [2024-11-15 10:37:55.457495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.926 pt2 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.926 [2024-11-15 10:37:55.462612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:24.926 [2024-11-15 10:37:55.464869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.926 [2024-11-15 10:37:55.465083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:24.926 [2024-11-15 10:37:55.465104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:24.926 [2024-11-15 10:37:55.465466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:24.926 [2024-11-15 10:37:55.465666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:24.926 [2024-11-15 10:37:55.465687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:24.926 [2024-11-15 10:37:55.465877] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.926 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.186 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.186 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.186 "name": "raid_bdev1", 00:09:25.186 "uuid": 
"b1e99c93-af42-4bd2-9a14-85f3ce275004", 00:09:25.186 "strip_size_kb": 64, 00:09:25.186 "state": "online", 00:09:25.186 "raid_level": "raid0", 00:09:25.186 "superblock": true, 00:09:25.186 "num_base_bdevs": 2, 00:09:25.186 "num_base_bdevs_discovered": 2, 00:09:25.186 "num_base_bdevs_operational": 2, 00:09:25.186 "base_bdevs_list": [ 00:09:25.186 { 00:09:25.186 "name": "pt1", 00:09:25.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.186 "is_configured": true, 00:09:25.186 "data_offset": 2048, 00:09:25.186 "data_size": 63488 00:09:25.186 }, 00:09:25.186 { 00:09:25.186 "name": "pt2", 00:09:25.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.186 "is_configured": true, 00:09:25.186 "data_offset": 2048, 00:09:25.186 "data_size": 63488 00:09:25.186 } 00:09:25.186 ] 00:09:25.186 }' 00:09:25.186 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.186 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.444 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:25.444 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:25.444 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.444 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.444 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.444 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.444 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.444 10:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.444 10:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.444 10:37:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.444 [2024-11-15 10:37:55.991072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.703 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.703 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.703 "name": "raid_bdev1", 00:09:25.703 "aliases": [ 00:09:25.703 "b1e99c93-af42-4bd2-9a14-85f3ce275004" 00:09:25.703 ], 00:09:25.703 "product_name": "Raid Volume", 00:09:25.703 "block_size": 512, 00:09:25.703 "num_blocks": 126976, 00:09:25.703 "uuid": "b1e99c93-af42-4bd2-9a14-85f3ce275004", 00:09:25.703 "assigned_rate_limits": { 00:09:25.703 "rw_ios_per_sec": 0, 00:09:25.703 "rw_mbytes_per_sec": 0, 00:09:25.703 "r_mbytes_per_sec": 0, 00:09:25.703 "w_mbytes_per_sec": 0 00:09:25.703 }, 00:09:25.703 "claimed": false, 00:09:25.703 "zoned": false, 00:09:25.703 "supported_io_types": { 00:09:25.703 "read": true, 00:09:25.703 "write": true, 00:09:25.703 "unmap": true, 00:09:25.703 "flush": true, 00:09:25.703 "reset": true, 00:09:25.703 "nvme_admin": false, 00:09:25.703 "nvme_io": false, 00:09:25.703 "nvme_io_md": false, 00:09:25.703 "write_zeroes": true, 00:09:25.703 "zcopy": false, 00:09:25.703 "get_zone_info": false, 00:09:25.703 "zone_management": false, 00:09:25.703 "zone_append": false, 00:09:25.703 "compare": false, 00:09:25.703 "compare_and_write": false, 00:09:25.703 "abort": false, 00:09:25.703 "seek_hole": false, 00:09:25.703 "seek_data": false, 00:09:25.703 "copy": false, 00:09:25.703 "nvme_iov_md": false 00:09:25.703 }, 00:09:25.703 "memory_domains": [ 00:09:25.703 { 00:09:25.703 "dma_device_id": "system", 00:09:25.703 "dma_device_type": 1 00:09:25.703 }, 00:09:25.703 { 00:09:25.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.703 "dma_device_type": 2 00:09:25.703 }, 00:09:25.703 { 00:09:25.703 "dma_device_id": "system", 00:09:25.703 "dma_device_type": 
1 00:09:25.703 }, 00:09:25.703 { 00:09:25.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.703 "dma_device_type": 2 00:09:25.703 } 00:09:25.703 ], 00:09:25.703 "driver_specific": { 00:09:25.703 "raid": { 00:09:25.703 "uuid": "b1e99c93-af42-4bd2-9a14-85f3ce275004", 00:09:25.703 "strip_size_kb": 64, 00:09:25.703 "state": "online", 00:09:25.703 "raid_level": "raid0", 00:09:25.703 "superblock": true, 00:09:25.703 "num_base_bdevs": 2, 00:09:25.703 "num_base_bdevs_discovered": 2, 00:09:25.703 "num_base_bdevs_operational": 2, 00:09:25.703 "base_bdevs_list": [ 00:09:25.703 { 00:09:25.703 "name": "pt1", 00:09:25.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.703 "is_configured": true, 00:09:25.703 "data_offset": 2048, 00:09:25.703 "data_size": 63488 00:09:25.703 }, 00:09:25.703 { 00:09:25.703 "name": "pt2", 00:09:25.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.703 "is_configured": true, 00:09:25.703 "data_offset": 2048, 00:09:25.703 "data_size": 63488 00:09:25.703 } 00:09:25.703 ] 00:09:25.703 } 00:09:25.703 } 00:09:25.703 }' 00:09:25.703 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:25.704 pt2' 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.704 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:25.704 [2024-11-15 10:37:56.255128] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.962 10:37:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.962 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b1e99c93-af42-4bd2-9a14-85f3ce275004 00:09:25.962 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b1e99c93-af42-4bd2-9a14-85f3ce275004 ']' 00:09:25.962 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.962 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.962 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.962 [2024-11-15 10:37:56.302776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.962 [2024-11-15 10:37:56.302811] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.963 [2024-11-15 10:37:56.302916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.963 [2024-11-15 10:37:56.302982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.963 [2024-11-15 10:37:56.303002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.963 10:37:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.963 [2024-11-15 10:37:56.446844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:25.963 [2024-11-15 10:37:56.449207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:25.963 [2024-11-15 10:37:56.449294] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:25.963 [2024-11-15 10:37:56.449394] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:25.963 [2024-11-15 10:37:56.449422] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.963 [2024-11-15 10:37:56.449442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:25.963 request: 00:09:25.963 { 00:09:25.963 "name": "raid_bdev1", 00:09:25.963 "raid_level": "raid0", 00:09:25.963 "base_bdevs": [ 00:09:25.963 "malloc1", 00:09:25.963 "malloc2" 00:09:25.963 ], 00:09:25.963 "strip_size_kb": 64, 00:09:25.963 "superblock": false, 00:09:25.963 "method": "bdev_raid_create", 00:09:25.963 "req_id": 1 00:09:25.963 } 00:09:25.963 Got JSON-RPC error response 00:09:25.963 response: 00:09:25.963 { 00:09:25.963 "code": -17, 00:09:25.963 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:25.963 } 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.963 [2024-11-15 10:37:56.506855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:25.963 [2024-11-15 10:37:56.506941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.963 [2024-11-15 10:37:56.506968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:25.963 [2024-11-15 10:37:56.506984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.963 [2024-11-15 10:37:56.509770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.963 [2024-11-15 10:37:56.509949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:25.963 [2024-11-15 10:37:56.510078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:25.963 [2024-11-15 10:37:56.510158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:25.963 pt1 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.963 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.221 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.221 10:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.221 "name": "raid_bdev1", 00:09:26.221 "uuid": "b1e99c93-af42-4bd2-9a14-85f3ce275004", 00:09:26.221 "strip_size_kb": 64, 00:09:26.221 "state": "configuring", 00:09:26.221 "raid_level": "raid0", 00:09:26.221 "superblock": true, 00:09:26.221 "num_base_bdevs": 2, 00:09:26.221 "num_base_bdevs_discovered": 1, 00:09:26.221 "num_base_bdevs_operational": 2, 00:09:26.221 "base_bdevs_list": [ 00:09:26.221 { 00:09:26.221 "name": "pt1", 00:09:26.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.221 "is_configured": true, 00:09:26.221 "data_offset": 2048, 00:09:26.221 "data_size": 63488 00:09:26.221 }, 00:09:26.221 { 00:09:26.221 "name": null, 00:09:26.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.221 "is_configured": false, 00:09:26.221 "data_offset": 2048, 00:09:26.221 "data_size": 63488 00:09:26.221 } 00:09:26.221 ] 00:09:26.221 }' 00:09:26.221 10:37:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.221 10:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.479 [2024-11-15 10:37:57.015009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.479 [2024-11-15 10:37:57.015247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.479 [2024-11-15 10:37:57.015299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:26.479 [2024-11-15 10:37:57.015319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.479 [2024-11-15 10:37:57.015904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.479 [2024-11-15 10:37:57.015954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.479 [2024-11-15 10:37:57.016058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:26.479 [2024-11-15 10:37:57.016097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.479 [2024-11-15 10:37:57.016241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:26.479 [2024-11-15 10:37:57.016262] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:26.479 [2024-11-15 10:37:57.016599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:26.479 [2024-11-15 10:37:57.016942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:26.479 [2024-11-15 10:37:57.016968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:26.479 [2024-11-15 10:37:57.017146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.479 pt2 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.479 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.737 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.737 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.737 "name": "raid_bdev1", 00:09:26.737 "uuid": "b1e99c93-af42-4bd2-9a14-85f3ce275004", 00:09:26.737 "strip_size_kb": 64, 00:09:26.737 "state": "online", 00:09:26.737 "raid_level": "raid0", 00:09:26.737 "superblock": true, 00:09:26.737 "num_base_bdevs": 2, 00:09:26.737 "num_base_bdevs_discovered": 2, 00:09:26.737 "num_base_bdevs_operational": 2, 00:09:26.737 "base_bdevs_list": [ 00:09:26.737 { 00:09:26.737 "name": "pt1", 00:09:26.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.737 "is_configured": true, 00:09:26.737 "data_offset": 2048, 00:09:26.737 "data_size": 63488 00:09:26.737 }, 00:09:26.737 { 00:09:26.737 "name": "pt2", 00:09:26.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.737 "is_configured": true, 00:09:26.737 "data_offset": 2048, 00:09:26.737 "data_size": 63488 00:09:26.737 } 00:09:26.737 ] 00:09:26.737 }' 00:09:26.738 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.738 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:26.996 
10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.996 [2024-11-15 10:37:57.531487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.996 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.254 "name": "raid_bdev1", 00:09:27.254 "aliases": [ 00:09:27.254 "b1e99c93-af42-4bd2-9a14-85f3ce275004" 00:09:27.254 ], 00:09:27.254 "product_name": "Raid Volume", 00:09:27.254 "block_size": 512, 00:09:27.254 "num_blocks": 126976, 00:09:27.254 "uuid": "b1e99c93-af42-4bd2-9a14-85f3ce275004", 00:09:27.254 "assigned_rate_limits": { 00:09:27.254 "rw_ios_per_sec": 0, 00:09:27.254 "rw_mbytes_per_sec": 0, 00:09:27.254 "r_mbytes_per_sec": 0, 00:09:27.254 "w_mbytes_per_sec": 0 00:09:27.254 }, 00:09:27.254 "claimed": false, 00:09:27.254 "zoned": false, 00:09:27.254 "supported_io_types": { 00:09:27.254 "read": true, 00:09:27.254 "write": true, 00:09:27.254 "unmap": true, 00:09:27.254 "flush": true, 00:09:27.254 "reset": true, 00:09:27.254 "nvme_admin": false, 00:09:27.254 "nvme_io": false, 00:09:27.254 "nvme_io_md": false, 00:09:27.254 
"write_zeroes": true, 00:09:27.254 "zcopy": false, 00:09:27.254 "get_zone_info": false, 00:09:27.254 "zone_management": false, 00:09:27.254 "zone_append": false, 00:09:27.254 "compare": false, 00:09:27.254 "compare_and_write": false, 00:09:27.254 "abort": false, 00:09:27.254 "seek_hole": false, 00:09:27.254 "seek_data": false, 00:09:27.254 "copy": false, 00:09:27.254 "nvme_iov_md": false 00:09:27.254 }, 00:09:27.254 "memory_domains": [ 00:09:27.254 { 00:09:27.254 "dma_device_id": "system", 00:09:27.254 "dma_device_type": 1 00:09:27.254 }, 00:09:27.254 { 00:09:27.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.254 "dma_device_type": 2 00:09:27.254 }, 00:09:27.254 { 00:09:27.254 "dma_device_id": "system", 00:09:27.254 "dma_device_type": 1 00:09:27.254 }, 00:09:27.254 { 00:09:27.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.254 "dma_device_type": 2 00:09:27.254 } 00:09:27.254 ], 00:09:27.254 "driver_specific": { 00:09:27.254 "raid": { 00:09:27.254 "uuid": "b1e99c93-af42-4bd2-9a14-85f3ce275004", 00:09:27.254 "strip_size_kb": 64, 00:09:27.254 "state": "online", 00:09:27.254 "raid_level": "raid0", 00:09:27.254 "superblock": true, 00:09:27.254 "num_base_bdevs": 2, 00:09:27.254 "num_base_bdevs_discovered": 2, 00:09:27.254 "num_base_bdevs_operational": 2, 00:09:27.254 "base_bdevs_list": [ 00:09:27.254 { 00:09:27.254 "name": "pt1", 00:09:27.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.254 "is_configured": true, 00:09:27.254 "data_offset": 2048, 00:09:27.254 "data_size": 63488 00:09:27.254 }, 00:09:27.254 { 00:09:27.254 "name": "pt2", 00:09:27.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.254 "is_configured": true, 00:09:27.254 "data_offset": 2048, 00:09:27.254 "data_size": 63488 00:09:27.254 } 00:09:27.254 ] 00:09:27.254 } 00:09:27.254 } 00:09:27.254 }' 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:27.254 pt2' 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:27.254 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.255 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.255 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.255 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.255 10:37:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.255 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.255 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:27.255 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:27.255 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.255 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.255 [2024-11-15 10:37:57.799535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b1e99c93-af42-4bd2-9a14-85f3ce275004 '!=' b1e99c93-af42-4bd2-9a14-85f3ce275004 ']' 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61375 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61375 ']' 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61375 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61375 00:09:27.513 killing process with pid 61375 
00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61375' 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61375 00:09:27.513 [2024-11-15 10:37:57.900023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.513 10:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61375 00:09:27.513 [2024-11-15 10:37:57.900145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.513 [2024-11-15 10:37:57.900219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.513 [2024-11-15 10:37:57.900244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:27.771 [2024-11-15 10:37:58.075193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.798 10:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:28.798 00:09:28.798 real 0m4.778s 00:09:28.798 user 0m7.190s 00:09:28.798 sys 0m0.590s 00:09:28.798 ************************************ 00:09:28.798 END TEST raid_superblock_test 00:09:28.798 ************************************ 00:09:28.798 10:37:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.798 10:37:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.798 10:37:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:28.798 10:37:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:28.798 10:37:59 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.798 10:37:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.798 ************************************ 00:09:28.798 START TEST raid_read_error_test 00:09:28.798 ************************************ 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:28.798 10:37:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5GXEuuNDHG 00:09:28.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61593 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61593 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61593 ']' 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.798 10:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.798 [2024-11-15 10:37:59.225569] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:09:28.798 [2024-11-15 10:37:59.225771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:09:29.057 [2024-11-15 10:37:59.403039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.057 [2024-11-15 10:37:59.506162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.316 [2024-11-15 10:37:59.686807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.316 [2024-11-15 10:37:59.686859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.883 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.883 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:29.883 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.883 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:29.883 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.883 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.883 BaseBdev1_malloc 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.884 true 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.884 [2024-11-15 10:38:00.269318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:29.884 [2024-11-15 10:38:00.269404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.884 [2024-11-15 10:38:00.269446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:29.884 [2024-11-15 10:38:00.269465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.884 [2024-11-15 10:38:00.272078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.884 [2024-11-15 10:38:00.272134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:29.884 BaseBdev1 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:29.884 BaseBdev2_malloc 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.884 true 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.884 [2024-11-15 10:38:00.320880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:29.884 [2024-11-15 10:38:00.321117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.884 [2024-11-15 10:38:00.321155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:29.884 [2024-11-15 10:38:00.321175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.884 [2024-11-15 10:38:00.323829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.884 [2024-11-15 10:38:00.323883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:29.884 BaseBdev2 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:29.884 10:38:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.884 [2024-11-15 10:38:00.328964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.884 [2024-11-15 10:38:00.331274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.884 [2024-11-15 10:38:00.331592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:29.884 [2024-11-15 10:38:00.331630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:29.884 [2024-11-15 10:38:00.331931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:29.884 [2024-11-15 10:38:00.332159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:29.884 [2024-11-15 10:38:00.332182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:29.884 [2024-11-15 10:38:00.332421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.884 "name": "raid_bdev1", 00:09:29.884 "uuid": "4456792e-f152-4299-a85e-8078e249f36a", 00:09:29.884 "strip_size_kb": 64, 00:09:29.884 "state": "online", 00:09:29.884 "raid_level": "raid0", 00:09:29.884 "superblock": true, 00:09:29.884 "num_base_bdevs": 2, 00:09:29.884 "num_base_bdevs_discovered": 2, 00:09:29.884 "num_base_bdevs_operational": 2, 00:09:29.884 "base_bdevs_list": [ 00:09:29.884 { 00:09:29.884 "name": "BaseBdev1", 00:09:29.884 "uuid": "f26f5327-27e9-55ce-8b45-4679e3172bd9", 00:09:29.884 "is_configured": true, 00:09:29.884 "data_offset": 2048, 00:09:29.884 "data_size": 63488 00:09:29.884 }, 00:09:29.884 { 00:09:29.884 "name": "BaseBdev2", 00:09:29.884 "uuid": "c1802fa4-57c4-50e0-8411-527f31a2bc7c", 00:09:29.884 "is_configured": true, 00:09:29.884 "data_offset": 2048, 00:09:29.884 "data_size": 63488 00:09:29.884 } 00:09:29.884 ] 00:09:29.884 }' 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.884 10:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.452 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:30.452 10:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.452 [2024-11-15 10:38:00.962435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:31.387 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.388 "name": "raid_bdev1", 00:09:31.388 "uuid": "4456792e-f152-4299-a85e-8078e249f36a", 00:09:31.388 "strip_size_kb": 64, 00:09:31.388 "state": "online", 00:09:31.388 "raid_level": "raid0", 00:09:31.388 "superblock": true, 00:09:31.388 "num_base_bdevs": 2, 00:09:31.388 "num_base_bdevs_discovered": 2, 00:09:31.388 "num_base_bdevs_operational": 2, 00:09:31.388 "base_bdevs_list": [ 00:09:31.388 { 00:09:31.388 "name": "BaseBdev1", 00:09:31.388 "uuid": "f26f5327-27e9-55ce-8b45-4679e3172bd9", 00:09:31.388 "is_configured": true, 00:09:31.388 "data_offset": 2048, 00:09:31.388 "data_size": 63488 00:09:31.388 }, 00:09:31.388 { 00:09:31.388 "name": "BaseBdev2", 00:09:31.388 "uuid": "c1802fa4-57c4-50e0-8411-527f31a2bc7c", 00:09:31.388 "is_configured": true, 00:09:31.388 "data_offset": 2048, 00:09:31.388 "data_size": 63488 00:09:31.388 } 00:09:31.388 ] 00:09:31.388 }' 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.388 10:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.954 [2024-11-15 10:38:02.393231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.954 [2024-11-15 10:38:02.393437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.954 [2024-11-15 10:38:02.397244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.954 [2024-11-15 10:38:02.397378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.954 [2024-11-15 10:38:02.397432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.954 [2024-11-15 10:38:02.397453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:31.954 { 00:09:31.954 "results": [ 00:09:31.954 { 00:09:31.954 "job": "raid_bdev1", 00:09:31.954 "core_mask": "0x1", 00:09:31.954 "workload": "randrw", 00:09:31.954 "percentage": 50, 00:09:31.954 "status": "finished", 00:09:31.954 "queue_depth": 1, 00:09:31.954 "io_size": 131072, 00:09:31.954 "runtime": 1.428992, 00:09:31.954 "iops": 10969.270646721605, 00:09:31.954 "mibps": 1371.1588308402006, 00:09:31.954 "io_failed": 1, 00:09:31.954 "io_timeout": 0, 00:09:31.954 "avg_latency_us": 125.33879421930456, 00:09:31.954 "min_latency_us": 44.45090909090909, 00:09:31.954 "max_latency_us": 1899.0545454545454 00:09:31.954 } 00:09:31.954 ], 00:09:31.954 "core_count": 1 00:09:31.954 } 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61593 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61593 ']' 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61593 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61593 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61593' 00:09:31.954 killing process with pid 61593 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61593 00:09:31.954 [2024-11-15 10:38:02.440870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.954 10:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61593 00:09:32.213 [2024-11-15 10:38:02.554619] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.149 10:38:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:33.149 10:38:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5GXEuuNDHG 00:09:33.149 10:38:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:33.149 10:38:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:33.149 ************************************ 00:09:33.149 END 
TEST raid_read_error_test 00:09:33.149 ************************************ 00:09:33.149 10:38:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:33.149 10:38:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.149 10:38:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:33.149 10:38:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:33.149 00:09:33.149 real 0m4.495s 00:09:33.149 user 0m5.752s 00:09:33.149 sys 0m0.459s 00:09:33.150 10:38:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.150 10:38:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.150 10:38:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:33.150 10:38:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:33.150 10:38:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.150 10:38:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.150 ************************************ 00:09:33.150 START TEST raid_write_error_test 00:09:33.150 ************************************ 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zyv2GQ9Fwp 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61733 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 61733 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61733 ']' 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:33.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:33.150 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.408 [2024-11-15 10:38:03.775255] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:09:33.408 [2024-11-15 10:38:03.775454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61733 ] 00:09:33.408 [2024-11-15 10:38:03.960679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.666 [2024-11-15 10:38:04.086048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.925 [2024-11-15 10:38:04.299160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.925 [2024-11-15 10:38:04.299237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 BaseBdev1_malloc 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 true 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 [2024-11-15 10:38:04.889226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:34.492 [2024-11-15 10:38:04.889299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.492 [2024-11-15 10:38:04.889330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:34.492 [2024-11-15 10:38:04.889365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.492 [2024-11-15 10:38:04.891981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.492 [2024-11-15 10:38:04.892036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:34.492 BaseBdev1 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 BaseBdev2_malloc 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:34.492 10:38:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 true 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 [2024-11-15 10:38:04.949168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:34.492 [2024-11-15 10:38:04.949408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.492 [2024-11-15 10:38:04.949448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:34.492 [2024-11-15 10:38:04.949467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.492 [2024-11-15 10:38:04.952081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.492 [2024-11-15 10:38:04.952137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:34.492 BaseBdev2 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.492 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 [2024-11-15 10:38:04.957239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:34.492 [2024-11-15 10:38:04.959554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.492 [2024-11-15 10:38:04.959809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:34.492 [2024-11-15 10:38:04.959843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:34.492 [2024-11-15 10:38:04.960151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:34.493 [2024-11-15 10:38:04.960392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:34.493 [2024-11-15 10:38:04.960416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:34.493 [2024-11-15 10:38:04.960618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.493 10:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.493 "name": "raid_bdev1", 00:09:34.493 "uuid": "656a64aa-8910-4018-8452-d4a6de5252e3", 00:09:34.493 "strip_size_kb": 64, 00:09:34.493 "state": "online", 00:09:34.493 "raid_level": "raid0", 00:09:34.493 "superblock": true, 00:09:34.493 "num_base_bdevs": 2, 00:09:34.493 "num_base_bdevs_discovered": 2, 00:09:34.493 "num_base_bdevs_operational": 2, 00:09:34.493 "base_bdevs_list": [ 00:09:34.493 { 00:09:34.493 "name": "BaseBdev1", 00:09:34.493 "uuid": "0769210e-1382-54f1-9388-a3d33d50a53e", 00:09:34.493 "is_configured": true, 00:09:34.493 "data_offset": 2048, 00:09:34.493 "data_size": 63488 00:09:34.493 }, 00:09:34.493 { 00:09:34.493 "name": "BaseBdev2", 00:09:34.493 "uuid": "0fa0f7a9-5e91-5c16-9374-cb0bc8e50671", 00:09:34.493 "is_configured": true, 00:09:34.493 "data_offset": 2048, 00:09:34.493 "data_size": 63488 00:09:34.493 } 00:09:34.493 ] 00:09:34.493 }' 00:09:34.493 10:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.493 10:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.069 10:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:35.069 10:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:35.069 [2024-11-15 10:38:05.570708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.006 "name": "raid_bdev1", 00:09:36.006 "uuid": "656a64aa-8910-4018-8452-d4a6de5252e3", 00:09:36.006 "strip_size_kb": 64, 00:09:36.006 "state": "online", 00:09:36.006 "raid_level": "raid0", 00:09:36.006 "superblock": true, 00:09:36.006 "num_base_bdevs": 2, 00:09:36.006 "num_base_bdevs_discovered": 2, 00:09:36.006 "num_base_bdevs_operational": 2, 00:09:36.006 "base_bdevs_list": [ 00:09:36.006 { 00:09:36.006 "name": "BaseBdev1", 00:09:36.006 "uuid": "0769210e-1382-54f1-9388-a3d33d50a53e", 00:09:36.006 "is_configured": true, 00:09:36.006 "data_offset": 2048, 00:09:36.006 "data_size": 63488 00:09:36.006 }, 00:09:36.006 { 00:09:36.006 "name": "BaseBdev2", 00:09:36.006 "uuid": "0fa0f7a9-5e91-5c16-9374-cb0bc8e50671", 00:09:36.006 "is_configured": true, 00:09:36.006 "data_offset": 2048, 00:09:36.006 "data_size": 63488 00:09:36.006 } 00:09:36.006 ] 00:09:36.006 }' 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.006 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.574 10:38:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.574 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.574 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.574 [2024-11-15 10:38:06.976711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.574 [2024-11-15 10:38:06.976754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.574 [2024-11-15 10:38:06.980420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.574 [2024-11-15 10:38:06.980623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.574 [2024-11-15 10:38:06.980714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.574 [2024-11-15 10:38:06.980946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:36.574 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.574 10:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61733 00:09:36.574 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 61733 ']' 00:09:36.574 { 00:09:36.574 "results": [ 00:09:36.574 { 00:09:36.574 "job": "raid_bdev1", 00:09:36.574 "core_mask": "0x1", 00:09:36.574 "workload": "randrw", 00:09:36.574 "percentage": 50, 00:09:36.574 "status": "finished", 00:09:36.574 "queue_depth": 1, 00:09:36.574 "io_size": 131072, 00:09:36.574 "runtime": 1.403743, 00:09:36.574 "iops": 11223.564427391624, 00:09:36.574 "mibps": 1402.945553423953, 00:09:36.574 "io_failed": 1, 00:09:36.574 "io_timeout": 0, 00:09:36.574 "avg_latency_us": 122.67125297145097, 00:09:36.574 "min_latency_us": 43.75272727272727, 00:09:36.574 "max_latency_us": 1906.5018181818182 00:09:36.574 } 
00:09:36.574 ], 00:09:36.574 "core_count": 1 00:09:36.574 } 00:09:36.574 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61733 00:09:36.574 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:36.574 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:36.575 10:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61733 00:09:36.575 killing process with pid 61733 00:09:36.575 10:38:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:36.575 10:38:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:36.575 10:38:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61733' 00:09:36.575 10:38:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61733 00:09:36.575 10:38:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61733 00:09:36.575 [2024-11-15 10:38:07.011756] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.575 [2024-11-15 10:38:07.125045] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zyv2GQ9Fwp 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@200 -- # return 1 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:37.952 00:09:37.952 real 0m4.512s 00:09:37.952 user 0m5.770s 00:09:37.952 sys 0m0.476s 00:09:37.952 ************************************ 00:09:37.952 END TEST raid_write_error_test 00:09:37.952 ************************************ 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:37.952 10:38:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.952 10:38:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:37.952 10:38:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:37.952 10:38:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:37.952 10:38:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:37.952 10:38:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.952 ************************************ 00:09:37.952 START TEST raid_state_function_test 00:09:37.952 ************************************ 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:37.952 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:37.953 Process raid pid: 61877 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61877 
00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61877' 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61877 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61877 ']' 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:37.953 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 [2024-11-15 10:38:08.330724] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:09:37.953 [2024-11-15 10:38:08.331210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.211 [2024-11-15 10:38:08.523740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.211 [2024-11-15 10:38:08.653111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.470 [2024-11-15 10:38:08.888251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.470 [2024-11-15 10:38:08.888306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.037 [2024-11-15 10:38:09.316148] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.037 [2024-11-15 10:38:09.316226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.037 [2024-11-15 10:38:09.316244] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.037 [2024-11-15 10:38:09.316260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.037 10:38:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.037 "name": "Existed_Raid", 00:09:39.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.037 "strip_size_kb": 64, 00:09:39.037 "state": "configuring", 00:09:39.037 
"raid_level": "concat", 00:09:39.037 "superblock": false, 00:09:39.037 "num_base_bdevs": 2, 00:09:39.037 "num_base_bdevs_discovered": 0, 00:09:39.037 "num_base_bdevs_operational": 2, 00:09:39.037 "base_bdevs_list": [ 00:09:39.037 { 00:09:39.037 "name": "BaseBdev1", 00:09:39.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.037 "is_configured": false, 00:09:39.037 "data_offset": 0, 00:09:39.037 "data_size": 0 00:09:39.037 }, 00:09:39.037 { 00:09:39.037 "name": "BaseBdev2", 00:09:39.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.037 "is_configured": false, 00:09:39.037 "data_offset": 0, 00:09:39.037 "data_size": 0 00:09:39.037 } 00:09:39.037 ] 00:09:39.037 }' 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.037 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.296 [2024-11-15 10:38:09.832327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.296 [2024-11-15 10:38:09.832422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:39.296 [2024-11-15 10:38:09.840281] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.296 [2024-11-15 10:38:09.840371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.296 [2024-11-15 10:38:09.840390] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.296 [2024-11-15 10:38:09.840410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.296 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.555 [2024-11-15 10:38:09.885114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.555 BaseBdev1 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.555 [ 00:09:39.555 { 00:09:39.555 "name": "BaseBdev1", 00:09:39.555 "aliases": [ 00:09:39.555 "bf1754e0-b4b5-4ee4-b99d-5194b78bc09d" 00:09:39.555 ], 00:09:39.555 "product_name": "Malloc disk", 00:09:39.555 "block_size": 512, 00:09:39.555 "num_blocks": 65536, 00:09:39.555 "uuid": "bf1754e0-b4b5-4ee4-b99d-5194b78bc09d", 00:09:39.555 "assigned_rate_limits": { 00:09:39.555 "rw_ios_per_sec": 0, 00:09:39.555 "rw_mbytes_per_sec": 0, 00:09:39.555 "r_mbytes_per_sec": 0, 00:09:39.555 "w_mbytes_per_sec": 0 00:09:39.555 }, 00:09:39.555 "claimed": true, 00:09:39.555 "claim_type": "exclusive_write", 00:09:39.555 "zoned": false, 00:09:39.555 "supported_io_types": { 00:09:39.555 "read": true, 00:09:39.555 "write": true, 00:09:39.555 "unmap": true, 00:09:39.555 "flush": true, 00:09:39.555 "reset": true, 00:09:39.555 "nvme_admin": false, 00:09:39.555 "nvme_io": false, 00:09:39.555 "nvme_io_md": false, 00:09:39.555 "write_zeroes": true, 00:09:39.555 "zcopy": true, 00:09:39.555 "get_zone_info": false, 00:09:39.555 "zone_management": false, 00:09:39.555 "zone_append": false, 00:09:39.555 "compare": false, 00:09:39.555 "compare_and_write": false, 00:09:39.555 "abort": true, 00:09:39.555 "seek_hole": false, 00:09:39.555 "seek_data": false, 00:09:39.555 "copy": true, 00:09:39.555 "nvme_iov_md": 
false 00:09:39.555 }, 00:09:39.555 "memory_domains": [ 00:09:39.555 { 00:09:39.555 "dma_device_id": "system", 00:09:39.555 "dma_device_type": 1 00:09:39.555 }, 00:09:39.555 { 00:09:39.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.555 "dma_device_type": 2 00:09:39.555 } 00:09:39.555 ], 00:09:39.555 "driver_specific": {} 00:09:39.555 } 00:09:39.555 ] 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.555 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.556 10:38:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.556 "name": "Existed_Raid", 00:09:39.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.556 "strip_size_kb": 64, 00:09:39.556 "state": "configuring", 00:09:39.556 "raid_level": "concat", 00:09:39.556 "superblock": false, 00:09:39.556 "num_base_bdevs": 2, 00:09:39.556 "num_base_bdevs_discovered": 1, 00:09:39.556 "num_base_bdevs_operational": 2, 00:09:39.556 "base_bdevs_list": [ 00:09:39.556 { 00:09:39.556 "name": "BaseBdev1", 00:09:39.556 "uuid": "bf1754e0-b4b5-4ee4-b99d-5194b78bc09d", 00:09:39.556 "is_configured": true, 00:09:39.556 "data_offset": 0, 00:09:39.556 "data_size": 65536 00:09:39.556 }, 00:09:39.556 { 00:09:39.556 "name": "BaseBdev2", 00:09:39.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.556 "is_configured": false, 00:09:39.556 "data_offset": 0, 00:09:39.556 "data_size": 0 00:09:39.556 } 00:09:39.556 ] 00:09:39.556 }' 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.556 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.123 [2024-11-15 10:38:10.441299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.123 [2024-11-15 10:38:10.441396] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.123 [2024-11-15 10:38:10.449337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.123 [2024-11-15 10:38:10.451633] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.123 [2024-11-15 10:38:10.451699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.123 "name": "Existed_Raid", 00:09:40.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.123 "strip_size_kb": 64, 00:09:40.123 "state": "configuring", 00:09:40.123 "raid_level": "concat", 00:09:40.123 "superblock": false, 00:09:40.123 "num_base_bdevs": 2, 00:09:40.123 "num_base_bdevs_discovered": 1, 00:09:40.123 "num_base_bdevs_operational": 2, 00:09:40.123 "base_bdevs_list": [ 00:09:40.123 { 00:09:40.123 "name": "BaseBdev1", 00:09:40.123 "uuid": "bf1754e0-b4b5-4ee4-b99d-5194b78bc09d", 00:09:40.123 "is_configured": true, 00:09:40.123 "data_offset": 0, 00:09:40.123 "data_size": 65536 00:09:40.123 }, 00:09:40.123 { 00:09:40.123 "name": "BaseBdev2", 00:09:40.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.123 "is_configured": false, 00:09:40.123 "data_offset": 0, 00:09:40.123 "data_size": 0 
00:09:40.123 } 00:09:40.123 ] 00:09:40.123 }' 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.123 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.690 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.690 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.690 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.690 [2024-11-15 10:38:11.007655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.690 [2024-11-15 10:38:11.007722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.690 [2024-11-15 10:38:11.007735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:40.690 [2024-11-15 10:38:11.008062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:40.690 [2024-11-15 10:38:11.008259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.690 [2024-11-15 10:38:11.008283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:40.690 BaseBdev2 00:09:40.690 [2024-11-15 10:38:11.008617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:40.690 10:38:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.690 [ 00:09:40.690 { 00:09:40.690 "name": "BaseBdev2", 00:09:40.690 "aliases": [ 00:09:40.690 "e2eb2b24-7104-4fb5-a005-ef98d90e219d" 00:09:40.690 ], 00:09:40.690 "product_name": "Malloc disk", 00:09:40.690 "block_size": 512, 00:09:40.690 "num_blocks": 65536, 00:09:40.690 "uuid": "e2eb2b24-7104-4fb5-a005-ef98d90e219d", 00:09:40.690 "assigned_rate_limits": { 00:09:40.690 "rw_ios_per_sec": 0, 00:09:40.690 "rw_mbytes_per_sec": 0, 00:09:40.690 "r_mbytes_per_sec": 0, 00:09:40.690 "w_mbytes_per_sec": 0 00:09:40.690 }, 00:09:40.690 "claimed": true, 00:09:40.690 "claim_type": "exclusive_write", 00:09:40.690 "zoned": false, 00:09:40.690 "supported_io_types": { 00:09:40.690 "read": true, 00:09:40.690 "write": true, 00:09:40.690 "unmap": true, 00:09:40.690 "flush": true, 00:09:40.690 "reset": true, 00:09:40.690 "nvme_admin": false, 00:09:40.690 "nvme_io": false, 00:09:40.690 "nvme_io_md": 
false, 00:09:40.690 "write_zeroes": true, 00:09:40.690 "zcopy": true, 00:09:40.690 "get_zone_info": false, 00:09:40.690 "zone_management": false, 00:09:40.690 "zone_append": false, 00:09:40.690 "compare": false, 00:09:40.690 "compare_and_write": false, 00:09:40.690 "abort": true, 00:09:40.690 "seek_hole": false, 00:09:40.690 "seek_data": false, 00:09:40.690 "copy": true, 00:09:40.690 "nvme_iov_md": false 00:09:40.690 }, 00:09:40.690 "memory_domains": [ 00:09:40.690 { 00:09:40.690 "dma_device_id": "system", 00:09:40.690 "dma_device_type": 1 00:09:40.690 }, 00:09:40.690 { 00:09:40.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.690 "dma_device_type": 2 00:09:40.690 } 00:09:40.690 ], 00:09:40.690 "driver_specific": {} 00:09:40.690 } 00:09:40.690 ] 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.690 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.690 "name": "Existed_Raid", 00:09:40.690 "uuid": "03c8d8c4-3e66-433e-bce5-59bbcbe02caa", 00:09:40.690 "strip_size_kb": 64, 00:09:40.690 "state": "online", 00:09:40.690 "raid_level": "concat", 00:09:40.690 "superblock": false, 00:09:40.690 "num_base_bdevs": 2, 00:09:40.690 "num_base_bdevs_discovered": 2, 00:09:40.690 "num_base_bdevs_operational": 2, 00:09:40.691 "base_bdevs_list": [ 00:09:40.691 { 00:09:40.691 "name": "BaseBdev1", 00:09:40.691 "uuid": "bf1754e0-b4b5-4ee4-b99d-5194b78bc09d", 00:09:40.691 "is_configured": true, 00:09:40.691 "data_offset": 0, 00:09:40.691 "data_size": 65536 00:09:40.691 }, 00:09:40.691 { 00:09:40.691 "name": "BaseBdev2", 00:09:40.691 "uuid": "e2eb2b24-7104-4fb5-a005-ef98d90e219d", 00:09:40.691 "is_configured": true, 00:09:40.691 "data_offset": 0, 00:09:40.691 "data_size": 65536 00:09:40.691 } 00:09:40.691 ] 00:09:40.691 }' 00:09:40.691 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:40.691 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.274 [2024-11-15 10:38:11.564190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.274 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.274 "name": "Existed_Raid", 00:09:41.274 "aliases": [ 00:09:41.274 "03c8d8c4-3e66-433e-bce5-59bbcbe02caa" 00:09:41.274 ], 00:09:41.274 "product_name": "Raid Volume", 00:09:41.274 "block_size": 512, 00:09:41.274 "num_blocks": 131072, 00:09:41.274 "uuid": "03c8d8c4-3e66-433e-bce5-59bbcbe02caa", 00:09:41.274 "assigned_rate_limits": { 00:09:41.274 "rw_ios_per_sec": 0, 00:09:41.274 "rw_mbytes_per_sec": 0, 00:09:41.274 "r_mbytes_per_sec": 
0, 00:09:41.274 "w_mbytes_per_sec": 0 00:09:41.274 }, 00:09:41.274 "claimed": false, 00:09:41.274 "zoned": false, 00:09:41.274 "supported_io_types": { 00:09:41.274 "read": true, 00:09:41.274 "write": true, 00:09:41.274 "unmap": true, 00:09:41.274 "flush": true, 00:09:41.274 "reset": true, 00:09:41.274 "nvme_admin": false, 00:09:41.274 "nvme_io": false, 00:09:41.274 "nvme_io_md": false, 00:09:41.274 "write_zeroes": true, 00:09:41.274 "zcopy": false, 00:09:41.274 "get_zone_info": false, 00:09:41.274 "zone_management": false, 00:09:41.274 "zone_append": false, 00:09:41.274 "compare": false, 00:09:41.274 "compare_and_write": false, 00:09:41.274 "abort": false, 00:09:41.274 "seek_hole": false, 00:09:41.274 "seek_data": false, 00:09:41.274 "copy": false, 00:09:41.274 "nvme_iov_md": false 00:09:41.274 }, 00:09:41.274 "memory_domains": [ 00:09:41.274 { 00:09:41.274 "dma_device_id": "system", 00:09:41.274 "dma_device_type": 1 00:09:41.274 }, 00:09:41.275 { 00:09:41.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.275 "dma_device_type": 2 00:09:41.275 }, 00:09:41.275 { 00:09:41.275 "dma_device_id": "system", 00:09:41.275 "dma_device_type": 1 00:09:41.275 }, 00:09:41.275 { 00:09:41.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.275 "dma_device_type": 2 00:09:41.275 } 00:09:41.275 ], 00:09:41.275 "driver_specific": { 00:09:41.275 "raid": { 00:09:41.275 "uuid": "03c8d8c4-3e66-433e-bce5-59bbcbe02caa", 00:09:41.275 "strip_size_kb": 64, 00:09:41.275 "state": "online", 00:09:41.275 "raid_level": "concat", 00:09:41.275 "superblock": false, 00:09:41.275 "num_base_bdevs": 2, 00:09:41.275 "num_base_bdevs_discovered": 2, 00:09:41.275 "num_base_bdevs_operational": 2, 00:09:41.275 "base_bdevs_list": [ 00:09:41.275 { 00:09:41.275 "name": "BaseBdev1", 00:09:41.275 "uuid": "bf1754e0-b4b5-4ee4-b99d-5194b78bc09d", 00:09:41.275 "is_configured": true, 00:09:41.275 "data_offset": 0, 00:09:41.275 "data_size": 65536 00:09:41.275 }, 00:09:41.275 { 00:09:41.275 "name": "BaseBdev2", 
00:09:41.275 "uuid": "e2eb2b24-7104-4fb5-a005-ef98d90e219d", 00:09:41.275 "is_configured": true, 00:09:41.275 "data_offset": 0, 00:09:41.275 "data_size": 65536 00:09:41.275 } 00:09:41.275 ] 00:09:41.275 } 00:09:41.275 } 00:09:41.275 }' 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:41.275 BaseBdev2' 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.275 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.275 [2024-11-15 10:38:11.811981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.275 [2024-11-15 10:38:11.812024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.275 [2024-11-15 10:38:11.812091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.532 "name": "Existed_Raid", 00:09:41.532 "uuid": "03c8d8c4-3e66-433e-bce5-59bbcbe02caa", 00:09:41.532 "strip_size_kb": 64, 00:09:41.532 
"state": "offline", 00:09:41.532 "raid_level": "concat", 00:09:41.532 "superblock": false, 00:09:41.532 "num_base_bdevs": 2, 00:09:41.532 "num_base_bdevs_discovered": 1, 00:09:41.532 "num_base_bdevs_operational": 1, 00:09:41.532 "base_bdevs_list": [ 00:09:41.532 { 00:09:41.532 "name": null, 00:09:41.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.532 "is_configured": false, 00:09:41.532 "data_offset": 0, 00:09:41.532 "data_size": 65536 00:09:41.532 }, 00:09:41.532 { 00:09:41.532 "name": "BaseBdev2", 00:09:41.532 "uuid": "e2eb2b24-7104-4fb5-a005-ef98d90e219d", 00:09:41.532 "is_configured": true, 00:09:41.532 "data_offset": 0, 00:09:41.532 "data_size": 65536 00:09:41.532 } 00:09:41.532 ] 00:09:41.532 }' 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.532 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.096 [2024-11-15 10:38:12.456158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.096 [2024-11-15 10:38:12.456228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:42.096 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61877 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61877 ']' 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61877 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61877 00:09:42.097 killing process with pid 61877 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61877' 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61877 00:09:42.097 [2024-11-15 10:38:12.632773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.097 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61877 00:09:42.097 [2024-11-15 10:38:12.647082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:43.470 00:09:43.470 real 0m5.418s 00:09:43.470 user 0m8.272s 00:09:43.470 sys 0m0.704s 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.470 ************************************ 00:09:43.470 END TEST raid_state_function_test 00:09:43.470 ************************************ 00:09:43.470 10:38:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:43.470 10:38:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:09:43.470 10:38:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:43.470 10:38:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.470 ************************************ 00:09:43.470 START TEST raid_state_function_test_sb 00:09:43.470 ************************************ 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62130 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62130' 00:09:43.470 Process raid pid: 62130 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62130 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62130 ']' 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:43.470 10:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.470 [2024-11-15 10:38:13.789096] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:09:43.470 [2024-11-15 10:38:13.789267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.470 [2024-11-15 10:38:13.972290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.728 [2024-11-15 10:38:14.099724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.729 [2024-11-15 10:38:14.282109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.729 [2024-11-15 10:38:14.282365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.294 10:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:44.294 10:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:44.294 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:44.294 10:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.294 10:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.294 [2024-11-15 10:38:14.810164] bdev.c:8653:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:44.294 [2024-11-15 10:38:14.810227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.294 [2024-11-15 10:38:14.810244] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.294 [2024-11-15 10:38:14.810259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.295 
10:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.295 10:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.553 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.553 "name": "Existed_Raid", 00:09:44.553 "uuid": "67b5d87a-0dda-4b01-8a5c-b00b6d935634", 00:09:44.553 "strip_size_kb": 64, 00:09:44.553 "state": "configuring", 00:09:44.553 "raid_level": "concat", 00:09:44.553 "superblock": true, 00:09:44.553 "num_base_bdevs": 2, 00:09:44.553 "num_base_bdevs_discovered": 0, 00:09:44.553 "num_base_bdevs_operational": 2, 00:09:44.553 "base_bdevs_list": [ 00:09:44.553 { 00:09:44.553 "name": "BaseBdev1", 00:09:44.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.553 "is_configured": false, 00:09:44.553 "data_offset": 0, 00:09:44.553 "data_size": 0 00:09:44.553 }, 00:09:44.553 { 00:09:44.553 "name": "BaseBdev2", 00:09:44.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.553 "is_configured": false, 00:09:44.553 "data_offset": 0, 00:09:44.553 "data_size": 0 00:09:44.553 } 00:09:44.553 ] 00:09:44.553 }' 00:09:44.553 10:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.553 10:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.811 [2024-11-15 10:38:15.282218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:44.811 [2024-11-15 10:38:15.282260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.811 [2024-11-15 10:38:15.290209] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.811 [2024-11-15 10:38:15.290260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.811 [2024-11-15 10:38:15.290275] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.811 [2024-11-15 10:38:15.290292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.811 [2024-11-15 10:38:15.330161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.811 BaseBdev1 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.811 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.811 [ 00:09:44.811 { 00:09:44.811 "name": "BaseBdev1", 00:09:44.811 "aliases": [ 00:09:44.811 "ba471d1b-6117-41c0-80ad-c15d584af747" 00:09:44.811 ], 00:09:44.811 "product_name": "Malloc disk", 00:09:44.811 "block_size": 512, 00:09:44.811 "num_blocks": 65536, 00:09:44.811 "uuid": "ba471d1b-6117-41c0-80ad-c15d584af747", 00:09:44.811 "assigned_rate_limits": { 00:09:44.811 "rw_ios_per_sec": 0, 00:09:44.811 "rw_mbytes_per_sec": 0, 00:09:44.811 "r_mbytes_per_sec": 0, 00:09:44.811 "w_mbytes_per_sec": 0 00:09:44.811 }, 00:09:44.811 "claimed": true, 
00:09:44.811 "claim_type": "exclusive_write", 00:09:44.811 "zoned": false, 00:09:44.811 "supported_io_types": { 00:09:44.811 "read": true, 00:09:44.811 "write": true, 00:09:44.811 "unmap": true, 00:09:44.811 "flush": true, 00:09:44.811 "reset": true, 00:09:44.811 "nvme_admin": false, 00:09:44.811 "nvme_io": false, 00:09:44.811 "nvme_io_md": false, 00:09:44.811 "write_zeroes": true, 00:09:44.811 "zcopy": true, 00:09:44.812 "get_zone_info": false, 00:09:44.812 "zone_management": false, 00:09:44.812 "zone_append": false, 00:09:44.812 "compare": false, 00:09:44.812 "compare_and_write": false, 00:09:44.812 "abort": true, 00:09:44.812 "seek_hole": false, 00:09:44.812 "seek_data": false, 00:09:44.812 "copy": true, 00:09:44.812 "nvme_iov_md": false 00:09:44.812 }, 00:09:44.812 "memory_domains": [ 00:09:44.812 { 00:09:44.812 "dma_device_id": "system", 00:09:44.812 "dma_device_type": 1 00:09:44.812 }, 00:09:44.812 { 00:09:44.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.812 "dma_device_type": 2 00:09:44.812 } 00:09:44.812 ], 00:09:44.812 "driver_specific": {} 00:09:44.812 } 00:09:44.812 ] 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.812 10:38:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.812 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.070 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.070 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.070 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.070 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.070 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.070 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.070 "name": "Existed_Raid", 00:09:45.070 "uuid": "54fe9da6-7701-41ce-be2b-4cf35de46833", 00:09:45.070 "strip_size_kb": 64, 00:09:45.070 "state": "configuring", 00:09:45.070 "raid_level": "concat", 00:09:45.070 "superblock": true, 00:09:45.070 "num_base_bdevs": 2, 00:09:45.070 "num_base_bdevs_discovered": 1, 00:09:45.070 "num_base_bdevs_operational": 2, 00:09:45.070 "base_bdevs_list": [ 00:09:45.070 { 00:09:45.070 "name": "BaseBdev1", 00:09:45.070 "uuid": "ba471d1b-6117-41c0-80ad-c15d584af747", 00:09:45.070 "is_configured": true, 00:09:45.070 "data_offset": 2048, 00:09:45.070 "data_size": 63488 00:09:45.070 }, 00:09:45.070 { 00:09:45.070 "name": "BaseBdev2", 00:09:45.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.070 
"is_configured": false, 00:09:45.070 "data_offset": 0, 00:09:45.070 "data_size": 0 00:09:45.070 } 00:09:45.070 ] 00:09:45.070 }' 00:09:45.070 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.070 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.329 [2024-11-15 10:38:15.862343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.329 [2024-11-15 10:38:15.862418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.329 [2024-11-15 10:38:15.870409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.329 [2024-11-15 10:38:15.872707] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.329 [2024-11-15 10:38:15.872759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.329 10:38:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.329 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.587 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.587 10:38:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.587 "name": "Existed_Raid", 00:09:45.587 "uuid": "39a56511-8c4c-407a-b495-362a366eb4e2", 00:09:45.587 "strip_size_kb": 64, 00:09:45.587 "state": "configuring", 00:09:45.587 "raid_level": "concat", 00:09:45.587 "superblock": true, 00:09:45.587 "num_base_bdevs": 2, 00:09:45.587 "num_base_bdevs_discovered": 1, 00:09:45.587 "num_base_bdevs_operational": 2, 00:09:45.587 "base_bdevs_list": [ 00:09:45.587 { 00:09:45.587 "name": "BaseBdev1", 00:09:45.587 "uuid": "ba471d1b-6117-41c0-80ad-c15d584af747", 00:09:45.587 "is_configured": true, 00:09:45.587 "data_offset": 2048, 00:09:45.587 "data_size": 63488 00:09:45.587 }, 00:09:45.587 { 00:09:45.587 "name": "BaseBdev2", 00:09:45.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.587 "is_configured": false, 00:09:45.587 "data_offset": 0, 00:09:45.587 "data_size": 0 00:09:45.587 } 00:09:45.587 ] 00:09:45.587 }' 00:09:45.587 10:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.587 10:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.846 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.846 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.846 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.105 [2024-11-15 10:38:16.424267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.105 [2024-11-15 10:38:16.424806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.105 [2024-11-15 10:38:16.424832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:46.105 [2024-11-15 10:38:16.425154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:09:46.105 BaseBdev2 00:09:46.105 [2024-11-15 10:38:16.425364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.105 [2024-11-15 10:38:16.425387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:46.105 [2024-11-15 10:38:16.425558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.105 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.105 10:38:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.105 [ 00:09:46.105 { 00:09:46.105 "name": "BaseBdev2", 00:09:46.105 "aliases": [ 00:09:46.105 "28f05f16-f388-4e4f-b554-d55b2ffd289a" 00:09:46.105 ], 00:09:46.105 "product_name": "Malloc disk", 00:09:46.105 "block_size": 512, 00:09:46.105 "num_blocks": 65536, 00:09:46.105 "uuid": "28f05f16-f388-4e4f-b554-d55b2ffd289a", 00:09:46.105 "assigned_rate_limits": { 00:09:46.105 "rw_ios_per_sec": 0, 00:09:46.105 "rw_mbytes_per_sec": 0, 00:09:46.105 "r_mbytes_per_sec": 0, 00:09:46.105 "w_mbytes_per_sec": 0 00:09:46.105 }, 00:09:46.105 "claimed": true, 00:09:46.105 "claim_type": "exclusive_write", 00:09:46.105 "zoned": false, 00:09:46.105 "supported_io_types": { 00:09:46.105 "read": true, 00:09:46.105 "write": true, 00:09:46.105 "unmap": true, 00:09:46.105 "flush": true, 00:09:46.105 "reset": true, 00:09:46.105 "nvme_admin": false, 00:09:46.105 "nvme_io": false, 00:09:46.105 "nvme_io_md": false, 00:09:46.105 "write_zeroes": true, 00:09:46.105 "zcopy": true, 00:09:46.105 "get_zone_info": false, 00:09:46.105 "zone_management": false, 00:09:46.105 "zone_append": false, 00:09:46.105 "compare": false, 00:09:46.105 "compare_and_write": false, 00:09:46.105 "abort": true, 00:09:46.105 "seek_hole": false, 00:09:46.105 "seek_data": false, 00:09:46.105 "copy": true, 00:09:46.105 "nvme_iov_md": false 00:09:46.105 }, 00:09:46.105 "memory_domains": [ 00:09:46.105 { 00:09:46.105 "dma_device_id": "system", 00:09:46.105 "dma_device_type": 1 00:09:46.106 }, 00:09:46.106 { 00:09:46.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.106 "dma_device_type": 2 00:09:46.106 } 00:09:46.106 ], 00:09:46.106 "driver_specific": {} 00:09:46.106 } 00:09:46.106 ] 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:46.106 10:38:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.106 10:38:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.106 "name": "Existed_Raid", 00:09:46.106 "uuid": "39a56511-8c4c-407a-b495-362a366eb4e2", 00:09:46.106 "strip_size_kb": 64, 00:09:46.106 "state": "online", 00:09:46.106 "raid_level": "concat", 00:09:46.106 "superblock": true, 00:09:46.106 "num_base_bdevs": 2, 00:09:46.106 "num_base_bdevs_discovered": 2, 00:09:46.106 "num_base_bdevs_operational": 2, 00:09:46.106 "base_bdevs_list": [ 00:09:46.106 { 00:09:46.106 "name": "BaseBdev1", 00:09:46.106 "uuid": "ba471d1b-6117-41c0-80ad-c15d584af747", 00:09:46.106 "is_configured": true, 00:09:46.106 "data_offset": 2048, 00:09:46.106 "data_size": 63488 00:09:46.106 }, 00:09:46.106 { 00:09:46.106 "name": "BaseBdev2", 00:09:46.106 "uuid": "28f05f16-f388-4e4f-b554-d55b2ffd289a", 00:09:46.106 "is_configured": true, 00:09:46.106 "data_offset": 2048, 00:09:46.106 "data_size": 63488 00:09:46.106 } 00:09:46.106 ] 00:09:46.106 }' 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.106 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.673 [2024-11-15 10:38:16.964793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.673 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.673 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.673 "name": "Existed_Raid", 00:09:46.673 "aliases": [ 00:09:46.673 "39a56511-8c4c-407a-b495-362a366eb4e2" 00:09:46.673 ], 00:09:46.673 "product_name": "Raid Volume", 00:09:46.673 "block_size": 512, 00:09:46.673 "num_blocks": 126976, 00:09:46.673 "uuid": "39a56511-8c4c-407a-b495-362a366eb4e2", 00:09:46.673 "assigned_rate_limits": { 00:09:46.673 "rw_ios_per_sec": 0, 00:09:46.673 "rw_mbytes_per_sec": 0, 00:09:46.673 "r_mbytes_per_sec": 0, 00:09:46.673 "w_mbytes_per_sec": 0 00:09:46.673 }, 00:09:46.673 "claimed": false, 00:09:46.673 "zoned": false, 00:09:46.673 "supported_io_types": { 00:09:46.673 "read": true, 00:09:46.673 "write": true, 00:09:46.673 "unmap": true, 00:09:46.673 "flush": true, 00:09:46.673 "reset": true, 00:09:46.673 "nvme_admin": false, 00:09:46.673 "nvme_io": false, 00:09:46.673 "nvme_io_md": false, 00:09:46.673 "write_zeroes": true, 00:09:46.673 "zcopy": false, 00:09:46.673 "get_zone_info": false, 00:09:46.673 "zone_management": false, 00:09:46.673 "zone_append": false, 00:09:46.673 "compare": false, 00:09:46.673 "compare_and_write": false, 00:09:46.673 "abort": false, 00:09:46.673 "seek_hole": false, 00:09:46.673 "seek_data": false, 00:09:46.673 "copy": false, 00:09:46.673 "nvme_iov_md": false 00:09:46.673 }, 00:09:46.673 "memory_domains": [ 00:09:46.673 { 00:09:46.673 
"dma_device_id": "system", 00:09:46.673 "dma_device_type": 1 00:09:46.673 }, 00:09:46.673 { 00:09:46.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.673 "dma_device_type": 2 00:09:46.673 }, 00:09:46.673 { 00:09:46.673 "dma_device_id": "system", 00:09:46.673 "dma_device_type": 1 00:09:46.673 }, 00:09:46.673 { 00:09:46.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.673 "dma_device_type": 2 00:09:46.673 } 00:09:46.673 ], 00:09:46.673 "driver_specific": { 00:09:46.673 "raid": { 00:09:46.673 "uuid": "39a56511-8c4c-407a-b495-362a366eb4e2", 00:09:46.673 "strip_size_kb": 64, 00:09:46.673 "state": "online", 00:09:46.673 "raid_level": "concat", 00:09:46.673 "superblock": true, 00:09:46.673 "num_base_bdevs": 2, 00:09:46.673 "num_base_bdevs_discovered": 2, 00:09:46.673 "num_base_bdevs_operational": 2, 00:09:46.673 "base_bdevs_list": [ 00:09:46.673 { 00:09:46.673 "name": "BaseBdev1", 00:09:46.673 "uuid": "ba471d1b-6117-41c0-80ad-c15d584af747", 00:09:46.673 "is_configured": true, 00:09:46.673 "data_offset": 2048, 00:09:46.673 "data_size": 63488 00:09:46.673 }, 00:09:46.673 { 00:09:46.673 "name": "BaseBdev2", 00:09:46.673 "uuid": "28f05f16-f388-4e4f-b554-d55b2ffd289a", 00:09:46.673 "is_configured": true, 00:09:46.673 "data_offset": 2048, 00:09:46.673 "data_size": 63488 00:09:46.673 } 00:09:46.673 ] 00:09:46.673 } 00:09:46.673 } 00:09:46.673 }' 00:09:46.673 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.673 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:46.673 BaseBdev2' 00:09:46.673 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.673 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.673 10:38:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.673 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.674 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.674 [2024-11-15 10:38:17.228588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.674 [2024-11-15 10:38:17.228631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.674 [2024-11-15 10:38:17.228697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.932 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.933 "name": "Existed_Raid", 00:09:46.933 "uuid": "39a56511-8c4c-407a-b495-362a366eb4e2", 00:09:46.933 "strip_size_kb": 64, 00:09:46.933 "state": "offline", 00:09:46.933 "raid_level": "concat", 00:09:46.933 "superblock": true, 00:09:46.933 "num_base_bdevs": 2, 00:09:46.933 "num_base_bdevs_discovered": 1, 00:09:46.933 "num_base_bdevs_operational": 1, 00:09:46.933 "base_bdevs_list": [ 00:09:46.933 { 00:09:46.933 "name": null, 00:09:46.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.933 "is_configured": false, 00:09:46.933 "data_offset": 0, 00:09:46.933 "data_size": 63488 00:09:46.933 }, 00:09:46.933 { 00:09:46.933 "name": "BaseBdev2", 00:09:46.933 "uuid": "28f05f16-f388-4e4f-b554-d55b2ffd289a", 00:09:46.933 "is_configured": true, 00:09:46.933 "data_offset": 2048, 00:09:46.933 "data_size": 63488 00:09:46.933 } 00:09:46.933 ] 
00:09:46.933 }' 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.933 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.524 [2024-11-15 10:38:17.876579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:47.524 [2024-11-15 10:38:17.876651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.524 10:38:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.524 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62130 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62130 ']' 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62130 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62130 00:09:47.524 killing process with pid 62130 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62130' 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62130 00:09:47.524 [2024-11-15 10:38:18.045824] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.524 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62130 00:09:47.524 [2024-11-15 10:38:18.060501] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.925 ************************************ 00:09:48.925 END TEST raid_state_function_test_sb 00:09:48.925 ************************************ 00:09:48.925 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:48.925 00:09:48.925 real 0m5.370s 00:09:48.925 user 0m8.219s 00:09:48.925 sys 0m0.677s 00:09:48.925 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:48.925 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.925 10:38:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:48.925 10:38:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:48.925 10:38:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.925 10:38:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.925 ************************************ 00:09:48.925 START TEST raid_superblock_test 00:09:48.925 ************************************ 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62386 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62386 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62386 ']' 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:48.925 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:48.925 10:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.925 [2024-11-15 10:38:19.209684] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:09:48.925 [2024-11-15 10:38:19.209871] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62386 ] 00:09:48.925 [2024-11-15 10:38:19.398241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.184 [2024-11-15 10:38:19.524462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.184 [2024-11-15 10:38:19.729613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.184 [2024-11-15 10:38:19.729686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.751 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.010 malloc1 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.010 [2024-11-15 10:38:20.318682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.010 [2024-11-15 10:38:20.318979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.010 [2024-11-15 10:38:20.319025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:50.010 [2024-11-15 10:38:20.319042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:50.010 [2024-11-15 10:38:20.321802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.010 [2024-11-15 10:38:20.321852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.010 pt1 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.010 malloc2 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.010 [2024-11-15 10:38:20.362561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.010 [2024-11-15 10:38:20.362643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.010 [2024-11-15 10:38:20.362683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:50.010 [2024-11-15 10:38:20.362697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.010 [2024-11-15 10:38:20.365398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.010 [2024-11-15 10:38:20.365446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.010 pt2 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.010 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.010 [2024-11-15 10:38:20.374637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:50.011 [2024-11-15 10:38:20.376923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.011 [2024-11-15 10:38:20.377291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:50.011 [2024-11-15 10:38:20.377317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:09:50.011 [2024-11-15 10:38:20.377679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:50.011 [2024-11-15 10:38:20.377874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:50.011 [2024-11-15 10:38:20.377895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:50.011 [2024-11-15 10:38:20.378117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.011 10:38:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.011 "name": "raid_bdev1", 00:09:50.011 "uuid": "f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3", 00:09:50.011 "strip_size_kb": 64, 00:09:50.011 "state": "online", 00:09:50.011 "raid_level": "concat", 00:09:50.011 "superblock": true, 00:09:50.011 "num_base_bdevs": 2, 00:09:50.011 "num_base_bdevs_discovered": 2, 00:09:50.011 "num_base_bdevs_operational": 2, 00:09:50.011 "base_bdevs_list": [ 00:09:50.011 { 00:09:50.011 "name": "pt1", 00:09:50.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.011 "is_configured": true, 00:09:50.011 "data_offset": 2048, 00:09:50.011 "data_size": 63488 00:09:50.011 }, 00:09:50.011 { 00:09:50.011 "name": "pt2", 00:09:50.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.011 "is_configured": true, 00:09:50.011 "data_offset": 2048, 00:09:50.011 "data_size": 63488 00:09:50.011 } 00:09:50.011 ] 00:09:50.011 }' 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.011 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.578 [2024-11-15 10:38:20.879046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.578 "name": "raid_bdev1", 00:09:50.578 "aliases": [ 00:09:50.578 "f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3" 00:09:50.578 ], 00:09:50.578 "product_name": "Raid Volume", 00:09:50.578 "block_size": 512, 00:09:50.578 "num_blocks": 126976, 00:09:50.578 "uuid": "f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3", 00:09:50.578 "assigned_rate_limits": { 00:09:50.578 "rw_ios_per_sec": 0, 00:09:50.578 "rw_mbytes_per_sec": 0, 00:09:50.578 "r_mbytes_per_sec": 0, 00:09:50.578 "w_mbytes_per_sec": 0 00:09:50.578 }, 00:09:50.578 "claimed": false, 00:09:50.578 "zoned": false, 00:09:50.578 "supported_io_types": { 00:09:50.578 "read": true, 00:09:50.578 "write": true, 00:09:50.578 "unmap": true, 00:09:50.578 "flush": true, 00:09:50.578 "reset": true, 00:09:50.578 "nvme_admin": false, 00:09:50.578 "nvme_io": false, 00:09:50.578 "nvme_io_md": false, 00:09:50.578 "write_zeroes": true, 00:09:50.578 "zcopy": false, 00:09:50.578 "get_zone_info": false, 00:09:50.578 "zone_management": false, 00:09:50.578 "zone_append": false, 00:09:50.578 "compare": false, 00:09:50.578 "compare_and_write": false, 00:09:50.578 "abort": false, 00:09:50.578 
"seek_hole": false, 00:09:50.578 "seek_data": false, 00:09:50.578 "copy": false, 00:09:50.578 "nvme_iov_md": false 00:09:50.578 }, 00:09:50.578 "memory_domains": [ 00:09:50.578 { 00:09:50.578 "dma_device_id": "system", 00:09:50.578 "dma_device_type": 1 00:09:50.578 }, 00:09:50.578 { 00:09:50.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.578 "dma_device_type": 2 00:09:50.578 }, 00:09:50.578 { 00:09:50.578 "dma_device_id": "system", 00:09:50.578 "dma_device_type": 1 00:09:50.578 }, 00:09:50.578 { 00:09:50.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.578 "dma_device_type": 2 00:09:50.578 } 00:09:50.578 ], 00:09:50.578 "driver_specific": { 00:09:50.578 "raid": { 00:09:50.578 "uuid": "f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3", 00:09:50.578 "strip_size_kb": 64, 00:09:50.578 "state": "online", 00:09:50.578 "raid_level": "concat", 00:09:50.578 "superblock": true, 00:09:50.578 "num_base_bdevs": 2, 00:09:50.578 "num_base_bdevs_discovered": 2, 00:09:50.578 "num_base_bdevs_operational": 2, 00:09:50.578 "base_bdevs_list": [ 00:09:50.578 { 00:09:50.578 "name": "pt1", 00:09:50.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.578 "is_configured": true, 00:09:50.578 "data_offset": 2048, 00:09:50.578 "data_size": 63488 00:09:50.578 }, 00:09:50.578 { 00:09:50.578 "name": "pt2", 00:09:50.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.578 "is_configured": true, 00:09:50.578 "data_offset": 2048, 00:09:50.578 "data_size": 63488 00:09:50.578 } 00:09:50.578 ] 00:09:50.578 } 00:09:50.578 } 00:09:50.578 }' 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:50.578 pt2' 00:09:50.578 10:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:50.578 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.578 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.578 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:50.578 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.578 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.578 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.578 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.578 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.578 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.579 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.579 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:50.579 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.579 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.579 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.579 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.838 [2024-11-15 10:38:21.151118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3 ']' 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.838 [2024-11-15 10:38:21.202797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.838 [2024-11-15 10:38:21.202834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.838 [2024-11-15 10:38:21.202960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.838 [2024-11-15 10:38:21.203070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.838 [2024-11-15 10:38:21.203105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:50.838 10:38:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.838 [2024-11-15 10:38:21.342847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:50.838 [2024-11-15 10:38:21.345160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:50.838 [2024-11-15 10:38:21.345254] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:50.838 [2024-11-15 10:38:21.345332] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:50.838 [2024-11-15 10:38:21.345375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.838 [2024-11-15 10:38:21.345393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:50.838 request: 00:09:50.838 { 00:09:50.838 "name": "raid_bdev1", 00:09:50.838 "raid_level": "concat", 00:09:50.838 "base_bdevs": [ 00:09:50.838 "malloc1", 00:09:50.838 "malloc2" 00:09:50.838 ], 00:09:50.838 "strip_size_kb": 64, 00:09:50.838 "superblock": false, 00:09:50.838 "method": "bdev_raid_create", 00:09:50.838 "req_id": 1 00:09:50.838 } 00:09:50.838 Got JSON-RPC error response 00:09:50.838 response: 00:09:50.838 { 00:09:50.838 "code": -17, 00:09:50.838 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:50.838 } 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:50.838 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.098 [2024-11-15 10:38:21.414848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:51.098 [2024-11-15 10:38:21.415061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.098 [2024-11-15 10:38:21.415249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:51.098 [2024-11-15 10:38:21.415425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.098 [2024-11-15 10:38:21.418196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.098 [2024-11-15 10:38:21.418372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:51.098 [2024-11-15 10:38:21.418602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:51.098 [2024-11-15 10:38:21.418793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.098 pt1 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.098 "name": "raid_bdev1", 00:09:51.098 "uuid": "f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3", 00:09:51.098 "strip_size_kb": 64, 00:09:51.098 "state": "configuring", 00:09:51.098 "raid_level": "concat", 00:09:51.098 "superblock": true, 00:09:51.098 "num_base_bdevs": 2, 00:09:51.098 "num_base_bdevs_discovered": 1, 00:09:51.098 "num_base_bdevs_operational": 2, 00:09:51.098 "base_bdevs_list": [ 00:09:51.098 { 00:09:51.098 
"name": "pt1", 00:09:51.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.098 "is_configured": true, 00:09:51.098 "data_offset": 2048, 00:09:51.098 "data_size": 63488 00:09:51.098 }, 00:09:51.098 { 00:09:51.098 "name": null, 00:09:51.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.098 "is_configured": false, 00:09:51.098 "data_offset": 2048, 00:09:51.098 "data_size": 63488 00:09:51.098 } 00:09:51.098 ] 00:09:51.098 }' 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.098 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.666 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:51.666 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:51.666 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.667 [2024-11-15 10:38:21.939287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:51.667 [2024-11-15 10:38:21.939389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.667 [2024-11-15 10:38:21.939424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:51.667 [2024-11-15 10:38:21.939441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.667 [2024-11-15 10:38:21.939984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.667 [2024-11-15 10:38:21.940032] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:51.667 [2024-11-15 10:38:21.940132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:51.667 [2024-11-15 10:38:21.940169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:51.667 [2024-11-15 10:38:21.940309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:51.667 [2024-11-15 10:38:21.940329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:51.667 [2024-11-15 10:38:21.940645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:51.667 [2024-11-15 10:38:21.940986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:51.667 [2024-11-15 10:38:21.941011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:51.667 [2024-11-15 10:38:21.941184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.667 pt2 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.667 
10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.667 "name": "raid_bdev1", 00:09:51.667 "uuid": "f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3", 00:09:51.667 "strip_size_kb": 64, 00:09:51.667 "state": "online", 00:09:51.667 "raid_level": "concat", 00:09:51.667 "superblock": true, 00:09:51.667 "num_base_bdevs": 2, 00:09:51.667 "num_base_bdevs_discovered": 2, 00:09:51.667 "num_base_bdevs_operational": 2, 00:09:51.667 "base_bdevs_list": [ 00:09:51.667 { 00:09:51.667 "name": "pt1", 00:09:51.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.667 "is_configured": true, 00:09:51.667 "data_offset": 2048, 00:09:51.667 "data_size": 63488 00:09:51.667 }, 00:09:51.667 { 00:09:51.667 "name": "pt2", 00:09:51.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.667 "is_configured": true, 00:09:51.667 "data_offset": 2048, 00:09:51.667 "data_size": 63488 
00:09:51.667 } 00:09:51.667 ] 00:09:51.667 }' 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.667 10:38:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.926 [2024-11-15 10:38:22.391707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.926 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.926 "name": "raid_bdev1", 00:09:51.926 "aliases": [ 00:09:51.926 "f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3" 00:09:51.926 ], 00:09:51.926 "product_name": "Raid Volume", 00:09:51.926 "block_size": 512, 00:09:51.926 "num_blocks": 126976, 00:09:51.926 "uuid": "f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3", 00:09:51.926 "assigned_rate_limits": { 00:09:51.926 
"rw_ios_per_sec": 0, 00:09:51.926 "rw_mbytes_per_sec": 0, 00:09:51.926 "r_mbytes_per_sec": 0, 00:09:51.926 "w_mbytes_per_sec": 0 00:09:51.926 }, 00:09:51.926 "claimed": false, 00:09:51.926 "zoned": false, 00:09:51.926 "supported_io_types": { 00:09:51.926 "read": true, 00:09:51.926 "write": true, 00:09:51.926 "unmap": true, 00:09:51.926 "flush": true, 00:09:51.926 "reset": true, 00:09:51.926 "nvme_admin": false, 00:09:51.926 "nvme_io": false, 00:09:51.926 "nvme_io_md": false, 00:09:51.926 "write_zeroes": true, 00:09:51.926 "zcopy": false, 00:09:51.926 "get_zone_info": false, 00:09:51.926 "zone_management": false, 00:09:51.926 "zone_append": false, 00:09:51.926 "compare": false, 00:09:51.926 "compare_and_write": false, 00:09:51.926 "abort": false, 00:09:51.926 "seek_hole": false, 00:09:51.926 "seek_data": false, 00:09:51.926 "copy": false, 00:09:51.926 "nvme_iov_md": false 00:09:51.926 }, 00:09:51.927 "memory_domains": [ 00:09:51.927 { 00:09:51.927 "dma_device_id": "system", 00:09:51.927 "dma_device_type": 1 00:09:51.927 }, 00:09:51.927 { 00:09:51.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.927 "dma_device_type": 2 00:09:51.927 }, 00:09:51.927 { 00:09:51.927 "dma_device_id": "system", 00:09:51.927 "dma_device_type": 1 00:09:51.927 }, 00:09:51.927 { 00:09:51.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.927 "dma_device_type": 2 00:09:51.927 } 00:09:51.927 ], 00:09:51.927 "driver_specific": { 00:09:51.927 "raid": { 00:09:51.927 "uuid": "f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3", 00:09:51.927 "strip_size_kb": 64, 00:09:51.927 "state": "online", 00:09:51.927 "raid_level": "concat", 00:09:51.927 "superblock": true, 00:09:51.927 "num_base_bdevs": 2, 00:09:51.927 "num_base_bdevs_discovered": 2, 00:09:51.927 "num_base_bdevs_operational": 2, 00:09:51.927 "base_bdevs_list": [ 00:09:51.927 { 00:09:51.927 "name": "pt1", 00:09:51.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.927 "is_configured": true, 00:09:51.927 "data_offset": 2048, 00:09:51.927 
"data_size": 63488 00:09:51.927 }, 00:09:51.927 { 00:09:51.927 "name": "pt2", 00:09:51.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.927 "is_configured": true, 00:09:51.927 "data_offset": 2048, 00:09:51.927 "data_size": 63488 00:09:51.927 } 00:09:51.927 ] 00:09:51.927 } 00:09:51.927 } 00:09:51.927 }' 00:09:51.927 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.927 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:51.927 pt2' 00:09:51.927 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.186 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.186 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.186 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:52.186 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.186 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.187 [2024-11-15 10:38:22.631755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3 '!=' f9bc40d4-c9f1-4b05-9914-fcd90bf9dcd3 ']' 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62386 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62386 ']' 
00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62386 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62386 00:09:52.187 killing process with pid 62386 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62386' 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62386 00:09:52.187 [2024-11-15 10:38:22.703816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.187 10:38:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62386 00:09:52.187 [2024-11-15 10:38:22.703927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.187 [2024-11-15 10:38:22.703994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.187 [2024-11-15 10:38:22.704017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:52.445 [2024-11-15 10:38:22.878732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.379 10:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:53.379 00:09:53.379 real 0m4.783s 00:09:53.379 user 0m7.160s 00:09:53.379 sys 0m0.589s 00:09:53.379 10:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.379 10:38:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.379 ************************************ 00:09:53.379 END TEST raid_superblock_test 00:09:53.379 ************************************ 00:09:53.379 10:38:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:53.379 10:38:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:53.379 10:38:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.379 10:38:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.379 ************************************ 00:09:53.379 START TEST raid_read_error_test 00:09:53.379 ************************************ 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.638 
10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AewUkubH52 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62599 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62599 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62599 ']' 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:53.638 10:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.638 [2024-11-15 10:38:24.050010] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:09:53.638 [2024-11-15 10:38:24.050182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62599 ] 00:09:53.897 [2024-11-15 10:38:24.234480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.897 [2024-11-15 10:38:24.362747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.155 [2024-11-15 10:38:24.568142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.155 [2024-11-15 10:38:24.568198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.720 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:54.720 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:54.720 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.720 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:54.720 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.720 10:38:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.720 BaseBdev1_malloc 00:09:54.720 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.721 true 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.721 [2024-11-15 10:38:25.159268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:54.721 [2024-11-15 10:38:25.159362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.721 [2024-11-15 10:38:25.159407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:54.721 [2024-11-15 10:38:25.159427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.721 [2024-11-15 10:38:25.162180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.721 [2024-11-15 10:38:25.162408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:54.721 BaseBdev1 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.721 10:38:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.721 BaseBdev2_malloc 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.721 true 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.721 [2024-11-15 10:38:25.215287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:54.721 [2024-11-15 10:38:25.215376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.721 [2024-11-15 10:38:25.215405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:54.721 [2024-11-15 10:38:25.215422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.721 [2024-11-15 10:38:25.218034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.721 [2024-11-15 10:38:25.218230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:09:54.721 BaseBdev2 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.721 [2024-11-15 10:38:25.223487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.721 [2024-11-15 10:38:25.226664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.721 [2024-11-15 10:38:25.227048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:54.721 [2024-11-15 10:38:25.227082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:54.721 [2024-11-15 10:38:25.227564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:54.721 [2024-11-15 10:38:25.227895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:54.721 [2024-11-15 10:38:25.227932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:54.721 [2024-11-15 10:38:25.228299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.721 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.979 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.979 "name": "raid_bdev1", 00:09:54.979 "uuid": "0634f748-6c22-417b-9dc3-8698df2dc453", 00:09:54.979 "strip_size_kb": 64, 00:09:54.979 "state": "online", 00:09:54.979 "raid_level": "concat", 00:09:54.979 "superblock": true, 00:09:54.979 "num_base_bdevs": 2, 00:09:54.979 "num_base_bdevs_discovered": 2, 00:09:54.979 "num_base_bdevs_operational": 2, 00:09:54.979 "base_bdevs_list": [ 00:09:54.979 { 00:09:54.979 "name": "BaseBdev1", 00:09:54.979 "uuid": "6daf98de-4a69-5eda-a285-eaeb33ababb6", 00:09:54.979 "is_configured": true, 00:09:54.979 "data_offset": 2048, 00:09:54.979 "data_size": 63488 
00:09:54.979 }, 00:09:54.979 { 00:09:54.979 "name": "BaseBdev2", 00:09:54.979 "uuid": "6a3a0628-d175-50d9-9dd8-378cf92e646f", 00:09:54.979 "is_configured": true, 00:09:54.979 "data_offset": 2048, 00:09:54.979 "data_size": 63488 00:09:54.979 } 00:09:54.979 ] 00:09:54.979 }' 00:09:54.979 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.979 10:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.237 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:55.237 10:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:55.495 [2024-11-15 10:38:25.857631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.428 "name": "raid_bdev1", 00:09:56.428 "uuid": "0634f748-6c22-417b-9dc3-8698df2dc453", 00:09:56.428 "strip_size_kb": 64, 00:09:56.428 "state": "online", 00:09:56.428 "raid_level": "concat", 00:09:56.428 "superblock": true, 00:09:56.428 "num_base_bdevs": 2, 00:09:56.428 "num_base_bdevs_discovered": 2, 00:09:56.428 "num_base_bdevs_operational": 2, 00:09:56.428 "base_bdevs_list": [ 00:09:56.428 { 00:09:56.428 "name": "BaseBdev1", 00:09:56.428 "uuid": "6daf98de-4a69-5eda-a285-eaeb33ababb6", 00:09:56.428 "is_configured": true, 00:09:56.428 "data_offset": 2048, 00:09:56.428 "data_size": 63488 
00:09:56.428 }, 00:09:56.428 { 00:09:56.428 "name": "BaseBdev2", 00:09:56.428 "uuid": "6a3a0628-d175-50d9-9dd8-378cf92e646f", 00:09:56.428 "is_configured": true, 00:09:56.428 "data_offset": 2048, 00:09:56.428 "data_size": 63488 00:09:56.428 } 00:09:56.428 ] 00:09:56.428 }' 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.428 10:38:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.686 10:38:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.686 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.686 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.686 [2024-11-15 10:38:27.230941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.686 [2024-11-15 10:38:27.231139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.686 [2024-11-15 10:38:27.234822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.686 [2024-11-15 10:38:27.235070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.686 [2024-11-15 10:38:27.235272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.686 [2024-11-15 10:38:27.235473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:56.686 { 00:09:56.686 "results": [ 00:09:56.686 { 00:09:56.686 "job": "raid_bdev1", 00:09:56.686 "core_mask": "0x1", 00:09:56.686 "workload": "randrw", 00:09:56.687 "percentage": 50, 00:09:56.687 "status": "finished", 00:09:56.687 "queue_depth": 1, 00:09:56.687 "io_size": 131072, 00:09:56.687 "runtime": 1.371514, 00:09:56.687 "iops": 11156.284223128601, 00:09:56.687 "mibps": 1394.5355278910752, 00:09:56.687 "io_failed": 1, 00:09:56.687
"io_timeout": 0, 00:09:56.687 "avg_latency_us": 123.23210988462591, 00:09:56.687 "min_latency_us": 43.054545454545455, 00:09:56.687 "max_latency_us": 1876.7127272727273 00:09:56.687 } 00:09:56.687 ], 00:09:56.687 "core_count": 1 00:09:56.687 } 00:09:56.687 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.687 10:38:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62599 00:09:56.687 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62599 ']' 00:09:56.687 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62599 00:09:56.687 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:56.945 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:56.945 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62599 00:09:56.945 killing process with pid 62599 00:09:56.945 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:56.945 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:56.945 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62599' 00:09:56.945 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62599 00:09:56.945 [2024-11-15 10:38:27.271701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.945 10:38:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62599 00:09:56.945 [2024-11-15 10:38:27.386449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.884 10:38:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AewUkubH52 00:09:57.884 10:38:28
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.884 10:38:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.884 10:38:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:57.884 10:38:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:57.884 10:38:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.884 10:38:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.884 10:38:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:57.884 00:09:57.884 real 0m4.502s 00:09:57.884 user 0m5.781s 00:09:57.884 sys 0m0.449s 00:09:57.884 10:38:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.884 ************************************ 00:09:57.884 END TEST raid_read_error_test 00:09:57.884 ************************************ 00:09:57.884 10:38:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.142 10:38:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:58.142 10:38:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:58.142 10:38:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.142 10:38:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.142 ************************************ 00:09:58.142 START TEST raid_write_error_test 00:09:58.142 ************************************ 00:09:58.142 10:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:58.143 10:38:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:58.143 10:38:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zzEdMC3wyT 00:09:58.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62747 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62747 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62747 ']' 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:58.143 10:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.143 [2024-11-15 10:38:28.605739] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:09:58.143 [2024-11-15 10:38:28.606154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62747 ] 00:09:58.401 [2024-11-15 10:38:28.790220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.401 [2024-11-15 10:38:28.914856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.659 [2024-11-15 10:38:29.125657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.659 [2024-11-15 10:38:29.125729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.228 BaseBdev1_malloc 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.228 true 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.228 [2024-11-15 10:38:29.659479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:59.228 [2024-11-15 10:38:29.659550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.228 [2024-11-15 10:38:29.659580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:59.228 [2024-11-15 10:38:29.659597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.228 [2024-11-15 10:38:29.662238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.228 [2024-11-15 10:38:29.662291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:59.228 BaseBdev1 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.228 BaseBdev2_malloc 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:59.228 10:38:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.228 true 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.228 [2024-11-15 10:38:29.711137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:59.228 [2024-11-15 10:38:29.711216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.228 [2024-11-15 10:38:29.711242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:59.228 [2024-11-15 10:38:29.711258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.228 [2024-11-15 10:38:29.713880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.228 [2024-11-15 10:38:29.713932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:59.228 BaseBdev2 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.228 [2024-11-15 10:38:29.719213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:59.228 [2024-11-15 10:38:29.721594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.228 [2024-11-15 10:38:29.721864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:59.228 [2024-11-15 10:38:29.721889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:59.228 [2024-11-15 10:38:29.722187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:59.228 [2024-11-15 10:38:29.722439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:59.228 [2024-11-15 10:38:29.722462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:59.228 [2024-11-15 10:38:29.722657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.228 10:38:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.228 "name": "raid_bdev1", 00:09:59.228 "uuid": "44f4a1ec-5385-4e9a-a082-a2c901625ffd", 00:09:59.228 "strip_size_kb": 64, 00:09:59.228 "state": "online", 00:09:59.228 "raid_level": "concat", 00:09:59.228 "superblock": true, 00:09:59.228 "num_base_bdevs": 2, 00:09:59.228 "num_base_bdevs_discovered": 2, 00:09:59.228 "num_base_bdevs_operational": 2, 00:09:59.228 "base_bdevs_list": [ 00:09:59.228 { 00:09:59.228 "name": "BaseBdev1", 00:09:59.228 "uuid": "0dad3937-265e-5a5e-bfef-cae210be98d8", 00:09:59.228 "is_configured": true, 00:09:59.228 "data_offset": 2048, 00:09:59.228 "data_size": 63488 00:09:59.228 }, 00:09:59.228 { 00:09:59.228 "name": "BaseBdev2", 00:09:59.228 "uuid": "c4118279-a22b-5996-a953-f40702f64211", 00:09:59.228 "is_configured": true, 00:09:59.228 "data_offset": 2048, 00:09:59.228 "data_size": 63488 00:09:59.228 } 00:09:59.228 ] 00:09:59.228 }' 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.228 10:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.800 10:38:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:59.800 10:38:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:59.800 [2024-11-15 10:38:30.352678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.733 "name": "raid_bdev1", 00:10:00.733 "uuid": "44f4a1ec-5385-4e9a-a082-a2c901625ffd", 00:10:00.733 "strip_size_kb": 64, 00:10:00.733 "state": "online", 00:10:00.733 "raid_level": "concat", 00:10:00.733 "superblock": true, 00:10:00.733 "num_base_bdevs": 2, 00:10:00.733 "num_base_bdevs_discovered": 2, 00:10:00.733 "num_base_bdevs_operational": 2, 00:10:00.733 "base_bdevs_list": [ 00:10:00.733 { 00:10:00.733 "name": "BaseBdev1", 00:10:00.733 "uuid": "0dad3937-265e-5a5e-bfef-cae210be98d8", 00:10:00.733 "is_configured": true, 00:10:00.733 "data_offset": 2048, 00:10:00.733 "data_size": 63488 00:10:00.733 }, 00:10:00.733 { 00:10:00.733 "name": "BaseBdev2", 00:10:00.733 "uuid": "c4118279-a22b-5996-a953-f40702f64211", 00:10:00.733 "is_configured": true, 00:10:00.733 "data_offset": 2048, 00:10:00.733 "data_size": 63488 00:10:00.733 } 00:10:00.733 ] 00:10:00.733 }' 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.733 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.298 [2024-11-15 10:38:31.804256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.298 [2024-11-15 10:38:31.804300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.298 [2024-11-15 10:38:31.807765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.298 [2024-11-15 10:38:31.807823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.298 [2024-11-15 10:38:31.807866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.298 [2024-11-15 10:38:31.807884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:01.298 { 00:10:01.298 "results": [ 00:10:01.298 { 00:10:01.298 "job": "raid_bdev1", 00:10:01.298 "core_mask": "0x1", 00:10:01.298 "workload": "randrw", 00:10:01.298 "percentage": 50, 00:10:01.298 "status": "finished", 00:10:01.298 "queue_depth": 1, 00:10:01.298 "io_size": 131072, 00:10:01.298 "runtime": 1.449545, 00:10:01.298 "iops": 11474.635144131435, 00:10:01.298 "mibps": 1434.3293930164293, 00:10:01.298 "io_failed": 1, 00:10:01.298 "io_timeout": 0, 00:10:01.298 "avg_latency_us": 119.46322253434913, 00:10:01.298 "min_latency_us": 42.82181818181818, 00:10:01.298 "max_latency_us": 1884.16 00:10:01.298 } 00:10:01.298 ], 00:10:01.298 "core_count": 1 00:10:01.298 } 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62747 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 62747 ']' 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62747 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62747 00:10:01.298 killing process with pid 62747 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62747' 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62747 00:10:01.298 10:38:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62747 00:10:01.298 [2024-11-15 10:38:31.844823] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.556 [2024-11-15 10:38:31.958057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zzEdMC3wyT 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:10:02.491 00:10:02.491 real 0m4.499s 00:10:02.491 user 0m5.801s 00:10:02.491 sys 0m0.461s 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.491 ************************************ 00:10:02.491 END TEST raid_write_error_test 00:10:02.491 ************************************ 00:10:02.491 10:38:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.491 10:38:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:02.491 10:38:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:02.491 10:38:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:02.491 10:38:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.491 10:38:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.491 ************************************ 00:10:02.491 START TEST raid_state_function_test 00:10:02.491 ************************************ 00:10:02.491 10:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:02.492 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:02.750 Process raid pid: 62890 00:10:02.750 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:02.750 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:02.750 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62890 00:10:02.750 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:02.750 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62890' 00:10:02.750 10:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62890 00:10:02.750 10:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62890 ']' 00:10:02.751 10:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.751 10:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:02.751 10:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.751 10:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:02.751 10:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.751 [2024-11-15 10:38:33.147316] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:10:02.751 [2024-11-15 10:38:33.147709] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.009 [2024-11-15 10:38:33.335511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.009 [2024-11-15 10:38:33.461642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.268 [2024-11-15 10:38:33.657326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.268 [2024-11-15 10:38:33.657390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.835 [2024-11-15 10:38:34.178995] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.835 [2024-11-15 10:38:34.179293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.835 [2024-11-15 10:38:34.179323] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.835 [2024-11-15 10:38:34.179365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.835 10:38:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.835 "name": "Existed_Raid", 00:10:03.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.835 "strip_size_kb": 0, 00:10:03.835 "state": "configuring", 00:10:03.835 
"raid_level": "raid1", 00:10:03.835 "superblock": false, 00:10:03.835 "num_base_bdevs": 2, 00:10:03.835 "num_base_bdevs_discovered": 0, 00:10:03.835 "num_base_bdevs_operational": 2, 00:10:03.835 "base_bdevs_list": [ 00:10:03.835 { 00:10:03.835 "name": "BaseBdev1", 00:10:03.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.835 "is_configured": false, 00:10:03.835 "data_offset": 0, 00:10:03.835 "data_size": 0 00:10:03.835 }, 00:10:03.835 { 00:10:03.835 "name": "BaseBdev2", 00:10:03.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.835 "is_configured": false, 00:10:03.835 "data_offset": 0, 00:10:03.835 "data_size": 0 00:10:03.835 } 00:10:03.835 ] 00:10:03.835 }' 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.835 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.399 [2024-11-15 10:38:34.687053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.399 [2024-11-15 10:38:34.687094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:04.399 [2024-11-15 10:38:34.695011] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.399 [2024-11-15 10:38:34.695222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.399 [2024-11-15 10:38:34.695367] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.399 [2024-11-15 10:38:34.695496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.399 [2024-11-15 10:38:34.736355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.399 BaseBdev1 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.399 [ 00:10:04.399 { 00:10:04.399 "name": "BaseBdev1", 00:10:04.399 "aliases": [ 00:10:04.399 "6e46ba0d-d175-4e1d-be45-c351191a3af5" 00:10:04.399 ], 00:10:04.399 "product_name": "Malloc disk", 00:10:04.399 "block_size": 512, 00:10:04.399 "num_blocks": 65536, 00:10:04.399 "uuid": "6e46ba0d-d175-4e1d-be45-c351191a3af5", 00:10:04.399 "assigned_rate_limits": { 00:10:04.399 "rw_ios_per_sec": 0, 00:10:04.399 "rw_mbytes_per_sec": 0, 00:10:04.399 "r_mbytes_per_sec": 0, 00:10:04.399 "w_mbytes_per_sec": 0 00:10:04.399 }, 00:10:04.399 "claimed": true, 00:10:04.399 "claim_type": "exclusive_write", 00:10:04.399 "zoned": false, 00:10:04.399 "supported_io_types": { 00:10:04.399 "read": true, 00:10:04.399 "write": true, 00:10:04.399 "unmap": true, 00:10:04.399 "flush": true, 00:10:04.399 "reset": true, 00:10:04.399 "nvme_admin": false, 00:10:04.399 "nvme_io": false, 00:10:04.399 "nvme_io_md": false, 00:10:04.399 "write_zeroes": true, 00:10:04.399 "zcopy": true, 00:10:04.399 "get_zone_info": false, 00:10:04.399 "zone_management": false, 00:10:04.399 "zone_append": false, 00:10:04.399 "compare": false, 00:10:04.399 "compare_and_write": false, 00:10:04.399 "abort": true, 00:10:04.399 "seek_hole": false, 00:10:04.399 "seek_data": false, 00:10:04.399 "copy": true, 00:10:04.399 "nvme_iov_md": 
false 00:10:04.399 }, 00:10:04.399 "memory_domains": [ 00:10:04.399 { 00:10:04.399 "dma_device_id": "system", 00:10:04.399 "dma_device_type": 1 00:10:04.399 }, 00:10:04.399 { 00:10:04.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.399 "dma_device_type": 2 00:10:04.399 } 00:10:04.399 ], 00:10:04.399 "driver_specific": {} 00:10:04.399 } 00:10:04.399 ] 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.399 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.400 
10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.400 "name": "Existed_Raid", 00:10:04.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.400 "strip_size_kb": 0, 00:10:04.400 "state": "configuring", 00:10:04.400 "raid_level": "raid1", 00:10:04.400 "superblock": false, 00:10:04.400 "num_base_bdevs": 2, 00:10:04.400 "num_base_bdevs_discovered": 1, 00:10:04.400 "num_base_bdevs_operational": 2, 00:10:04.400 "base_bdevs_list": [ 00:10:04.400 { 00:10:04.400 "name": "BaseBdev1", 00:10:04.400 "uuid": "6e46ba0d-d175-4e1d-be45-c351191a3af5", 00:10:04.400 "is_configured": true, 00:10:04.400 "data_offset": 0, 00:10:04.400 "data_size": 65536 00:10:04.400 }, 00:10:04.400 { 00:10:04.400 "name": "BaseBdev2", 00:10:04.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.400 "is_configured": false, 00:10:04.400 "data_offset": 0, 00:10:04.400 "data_size": 0 00:10:04.400 } 00:10:04.400 ] 00:10:04.400 }' 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.400 10:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.965 [2024-11-15 10:38:35.276572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.965 [2024-11-15 10:38:35.276804] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.965 [2024-11-15 10:38:35.284583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.965 [2024-11-15 10:38:35.286967] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.965 [2024-11-15 10:38:35.287157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.965 "name": "Existed_Raid", 00:10:04.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.965 "strip_size_kb": 0, 00:10:04.965 "state": "configuring", 00:10:04.965 "raid_level": "raid1", 00:10:04.965 "superblock": false, 00:10:04.965 "num_base_bdevs": 2, 00:10:04.965 "num_base_bdevs_discovered": 1, 00:10:04.965 "num_base_bdevs_operational": 2, 00:10:04.965 "base_bdevs_list": [ 00:10:04.965 { 00:10:04.965 "name": "BaseBdev1", 00:10:04.965 "uuid": "6e46ba0d-d175-4e1d-be45-c351191a3af5", 00:10:04.965 "is_configured": true, 00:10:04.965 "data_offset": 0, 00:10:04.965 "data_size": 65536 00:10:04.965 }, 00:10:04.965 { 00:10:04.965 "name": "BaseBdev2", 00:10:04.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.965 "is_configured": false, 00:10:04.965 "data_offset": 0, 00:10:04.965 "data_size": 0 00:10:04.965 } 00:10:04.965 ] 
00:10:04.965 }' 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.965 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.531 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.531 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.531 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.531 [2024-11-15 10:38:35.878429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.531 [2024-11-15 10:38:35.878672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:05.531 [2024-11-15 10:38:35.878697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:05.532 [2024-11-15 10:38:35.879026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:05.532 [2024-11-15 10:38:35.879256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:05.532 [2024-11-15 10:38:35.879281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:05.532 BaseBdev2 00:10:05.532 [2024-11-15 10:38:35.879615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@903 -- # local i 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.532 [ 00:10:05.532 { 00:10:05.532 "name": "BaseBdev2", 00:10:05.532 "aliases": [ 00:10:05.532 "bf772452-40fd-4277-bbfe-c5f378f0e9ec" 00:10:05.532 ], 00:10:05.532 "product_name": "Malloc disk", 00:10:05.532 "block_size": 512, 00:10:05.532 "num_blocks": 65536, 00:10:05.532 "uuid": "bf772452-40fd-4277-bbfe-c5f378f0e9ec", 00:10:05.532 "assigned_rate_limits": { 00:10:05.532 "rw_ios_per_sec": 0, 00:10:05.532 "rw_mbytes_per_sec": 0, 00:10:05.532 "r_mbytes_per_sec": 0, 00:10:05.532 "w_mbytes_per_sec": 0 00:10:05.532 }, 00:10:05.532 "claimed": true, 00:10:05.532 "claim_type": "exclusive_write", 00:10:05.532 "zoned": false, 00:10:05.532 "supported_io_types": { 00:10:05.532 "read": true, 00:10:05.532 "write": true, 00:10:05.532 "unmap": true, 00:10:05.532 "flush": true, 00:10:05.532 "reset": true, 00:10:05.532 "nvme_admin": false, 00:10:05.532 "nvme_io": false, 00:10:05.532 "nvme_io_md": false, 00:10:05.532 "write_zeroes": 
true, 00:10:05.532 "zcopy": true, 00:10:05.532 "get_zone_info": false, 00:10:05.532 "zone_management": false, 00:10:05.532 "zone_append": false, 00:10:05.532 "compare": false, 00:10:05.532 "compare_and_write": false, 00:10:05.532 "abort": true, 00:10:05.532 "seek_hole": false, 00:10:05.532 "seek_data": false, 00:10:05.532 "copy": true, 00:10:05.532 "nvme_iov_md": false 00:10:05.532 }, 00:10:05.532 "memory_domains": [ 00:10:05.532 { 00:10:05.532 "dma_device_id": "system", 00:10:05.532 "dma_device_type": 1 00:10:05.532 }, 00:10:05.532 { 00:10:05.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.532 "dma_device_type": 2 00:10:05.532 } 00:10:05.532 ], 00:10:05.532 "driver_specific": {} 00:10:05.532 } 00:10:05.532 ] 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.532 10:38:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.532 "name": "Existed_Raid", 00:10:05.532 "uuid": "febc4feb-a141-4e8d-800c-13397a0b9dae", 00:10:05.532 "strip_size_kb": 0, 00:10:05.532 "state": "online", 00:10:05.532 "raid_level": "raid1", 00:10:05.532 "superblock": false, 00:10:05.532 "num_base_bdevs": 2, 00:10:05.532 "num_base_bdevs_discovered": 2, 00:10:05.532 "num_base_bdevs_operational": 2, 00:10:05.532 "base_bdevs_list": [ 00:10:05.532 { 00:10:05.532 "name": "BaseBdev1", 00:10:05.532 "uuid": "6e46ba0d-d175-4e1d-be45-c351191a3af5", 00:10:05.532 "is_configured": true, 00:10:05.532 "data_offset": 0, 00:10:05.532 "data_size": 65536 00:10:05.532 }, 00:10:05.532 { 00:10:05.532 "name": "BaseBdev2", 00:10:05.532 "uuid": "bf772452-40fd-4277-bbfe-c5f378f0e9ec", 00:10:05.532 "is_configured": true, 00:10:05.532 "data_offset": 0, 00:10:05.532 "data_size": 65536 00:10:05.532 } 00:10:05.532 ] 00:10:05.532 }' 00:10:05.532 10:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.532 10:38:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.098 [2024-11-15 10:38:36.439001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.098 "name": "Existed_Raid", 00:10:06.098 "aliases": [ 00:10:06.098 "febc4feb-a141-4e8d-800c-13397a0b9dae" 00:10:06.098 ], 00:10:06.098 "product_name": "Raid Volume", 00:10:06.098 "block_size": 512, 00:10:06.098 "num_blocks": 65536, 00:10:06.098 "uuid": "febc4feb-a141-4e8d-800c-13397a0b9dae", 00:10:06.098 "assigned_rate_limits": { 00:10:06.098 "rw_ios_per_sec": 0, 00:10:06.098 "rw_mbytes_per_sec": 0, 00:10:06.098 "r_mbytes_per_sec": 0, 00:10:06.098 
"w_mbytes_per_sec": 0 00:10:06.098 }, 00:10:06.098 "claimed": false, 00:10:06.098 "zoned": false, 00:10:06.098 "supported_io_types": { 00:10:06.098 "read": true, 00:10:06.098 "write": true, 00:10:06.098 "unmap": false, 00:10:06.098 "flush": false, 00:10:06.098 "reset": true, 00:10:06.098 "nvme_admin": false, 00:10:06.098 "nvme_io": false, 00:10:06.098 "nvme_io_md": false, 00:10:06.098 "write_zeroes": true, 00:10:06.098 "zcopy": false, 00:10:06.098 "get_zone_info": false, 00:10:06.098 "zone_management": false, 00:10:06.098 "zone_append": false, 00:10:06.098 "compare": false, 00:10:06.098 "compare_and_write": false, 00:10:06.098 "abort": false, 00:10:06.098 "seek_hole": false, 00:10:06.098 "seek_data": false, 00:10:06.098 "copy": false, 00:10:06.098 "nvme_iov_md": false 00:10:06.098 }, 00:10:06.098 "memory_domains": [ 00:10:06.098 { 00:10:06.098 "dma_device_id": "system", 00:10:06.098 "dma_device_type": 1 00:10:06.098 }, 00:10:06.098 { 00:10:06.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.098 "dma_device_type": 2 00:10:06.098 }, 00:10:06.098 { 00:10:06.098 "dma_device_id": "system", 00:10:06.098 "dma_device_type": 1 00:10:06.098 }, 00:10:06.098 { 00:10:06.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.098 "dma_device_type": 2 00:10:06.098 } 00:10:06.098 ], 00:10:06.098 "driver_specific": { 00:10:06.098 "raid": { 00:10:06.098 "uuid": "febc4feb-a141-4e8d-800c-13397a0b9dae", 00:10:06.098 "strip_size_kb": 0, 00:10:06.098 "state": "online", 00:10:06.098 "raid_level": "raid1", 00:10:06.098 "superblock": false, 00:10:06.098 "num_base_bdevs": 2, 00:10:06.098 "num_base_bdevs_discovered": 2, 00:10:06.098 "num_base_bdevs_operational": 2, 00:10:06.098 "base_bdevs_list": [ 00:10:06.098 { 00:10:06.098 "name": "BaseBdev1", 00:10:06.098 "uuid": "6e46ba0d-d175-4e1d-be45-c351191a3af5", 00:10:06.098 "is_configured": true, 00:10:06.098 "data_offset": 0, 00:10:06.098 "data_size": 65536 00:10:06.098 }, 00:10:06.098 { 00:10:06.098 "name": "BaseBdev2", 00:10:06.098 "uuid": 
"bf772452-40fd-4277-bbfe-c5f378f0e9ec", 00:10:06.098 "is_configured": true, 00:10:06.098 "data_offset": 0, 00:10:06.098 "data_size": 65536 00:10:06.098 } 00:10:06.098 ] 00:10:06.098 } 00:10:06.098 } 00:10:06.098 }' 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:06.098 BaseBdev2' 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.098 10:38:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.098 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.357 [2024-11-15 10:38:36.698747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.357 "name": "Existed_Raid", 00:10:06.357 "uuid": "febc4feb-a141-4e8d-800c-13397a0b9dae", 00:10:06.357 "strip_size_kb": 0, 00:10:06.357 "state": "online", 00:10:06.357 "raid_level": "raid1", 00:10:06.357 "superblock": false, 00:10:06.357 "num_base_bdevs": 2, 00:10:06.357 "num_base_bdevs_discovered": 1, 00:10:06.357 "num_base_bdevs_operational": 1, 00:10:06.357 "base_bdevs_list": [ 00:10:06.357 { 
00:10:06.357 "name": null, 00:10:06.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.357 "is_configured": false, 00:10:06.357 "data_offset": 0, 00:10:06.357 "data_size": 65536 00:10:06.357 }, 00:10:06.357 { 00:10:06.357 "name": "BaseBdev2", 00:10:06.357 "uuid": "bf772452-40fd-4277-bbfe-c5f378f0e9ec", 00:10:06.357 "is_configured": true, 00:10:06.357 "data_offset": 0, 00:10:06.357 "data_size": 65536 00:10:06.357 } 00:10:06.357 ] 00:10:06.357 }' 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.357 10:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:06.923 [2024-11-15 10:38:37.347684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.923 [2024-11-15 10:38:37.347948] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.923 [2024-11-15 10:38:37.428235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.923 [2024-11-15 10:38:37.428309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.923 [2024-11-15 10:38:37.428330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:06.923 10:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62890 00:10:07.182 10:38:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62890 ']' 00:10:07.182 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62890 00:10:07.182 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:07.182 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:07.182 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62890 00:10:07.182 killing process with pid 62890 00:10:07.182 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:07.182 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:07.182 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62890' 00:10:07.182 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62890 00:10:07.182 [2024-11-15 10:38:37.515267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.182 10:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62890 00:10:07.182 [2024-11-15 10:38:37.529979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:08.116 00:10:08.116 real 0m5.512s 00:10:08.116 user 0m8.462s 00:10:08.116 sys 0m0.683s 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.116 ************************************ 00:10:08.116 END TEST raid_state_function_test 00:10:08.116 ************************************ 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.116 10:38:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:08.116 10:38:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:08.116 10:38:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.116 10:38:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.116 ************************************ 00:10:08.116 START TEST raid_state_function_test_sb 00:10:08.116 ************************************ 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:08.116 Process raid pid: 63149 00:10:08.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63149 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63149' 00:10:08.116 10:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63149 00:10:08.117 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 63149 ']' 00:10:08.117 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.117 10:38:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:10:08.117 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.117 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:08.117 10:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.374 [2024-11-15 10:38:38.716601] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:10:08.374 [2024-11-15 10:38:38.716950] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.374 [2024-11-15 10:38:38.893782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.632 [2024-11-15 10:38:39.019293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.890 [2024-11-15 10:38:39.240251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.890 [2024-11-15 10:38:39.240516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.457 [2024-11-15 10:38:39.741480] 
bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.457 [2024-11-15 10:38:39.741701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.457 [2024-11-15 10:38:39.741868] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.457 [2024-11-15 10:38:39.741907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.457 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.457 "name": "Existed_Raid", 00:10:09.457 "uuid": "468050be-feed-4bc2-ab5a-c5eeeb4d5fc0", 00:10:09.457 "strip_size_kb": 0, 00:10:09.457 "state": "configuring", 00:10:09.457 "raid_level": "raid1", 00:10:09.457 "superblock": true, 00:10:09.457 "num_base_bdevs": 2, 00:10:09.457 "num_base_bdevs_discovered": 0, 00:10:09.457 "num_base_bdevs_operational": 2, 00:10:09.457 "base_bdevs_list": [ 00:10:09.458 { 00:10:09.458 "name": "BaseBdev1", 00:10:09.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.458 "is_configured": false, 00:10:09.458 "data_offset": 0, 00:10:09.458 "data_size": 0 00:10:09.458 }, 00:10:09.458 { 00:10:09.458 "name": "BaseBdev2", 00:10:09.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.458 "is_configured": false, 00:10:09.458 "data_offset": 0, 00:10:09.458 "data_size": 0 00:10:09.458 } 00:10:09.458 ] 00:10:09.458 }' 00:10:09.458 10:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.458 10:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.717 [2024-11-15 10:38:40.253537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:09.717 [2024-11-15 10:38:40.253727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.717 [2024-11-15 10:38:40.261522] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.717 [2024-11-15 10:38:40.261707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.717 [2024-11-15 10:38:40.261870] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.717 [2024-11-15 10:38:40.261948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.717 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.978 [2024-11-15 10:38:40.302099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.978 BaseBdev1 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.978 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.978 [ 00:10:09.978 { 00:10:09.978 "name": "BaseBdev1", 00:10:09.978 "aliases": [ 00:10:09.978 "3166a402-ca11-49db-bfcb-02c58f415ae1" 00:10:09.978 ], 00:10:09.978 "product_name": "Malloc disk", 00:10:09.978 "block_size": 512, 00:10:09.978 "num_blocks": 65536, 00:10:09.978 "uuid": "3166a402-ca11-49db-bfcb-02c58f415ae1", 00:10:09.978 "assigned_rate_limits": { 00:10:09.978 "rw_ios_per_sec": 0, 00:10:09.978 "rw_mbytes_per_sec": 0, 00:10:09.978 "r_mbytes_per_sec": 0, 00:10:09.978 "w_mbytes_per_sec": 0 00:10:09.978 }, 00:10:09.978 "claimed": true, 
00:10:09.978 "claim_type": "exclusive_write", 00:10:09.978 "zoned": false, 00:10:09.978 "supported_io_types": { 00:10:09.978 "read": true, 00:10:09.978 "write": true, 00:10:09.978 "unmap": true, 00:10:09.978 "flush": true, 00:10:09.978 "reset": true, 00:10:09.978 "nvme_admin": false, 00:10:09.978 "nvme_io": false, 00:10:09.978 "nvme_io_md": false, 00:10:09.979 "write_zeroes": true, 00:10:09.979 "zcopy": true, 00:10:09.979 "get_zone_info": false, 00:10:09.979 "zone_management": false, 00:10:09.979 "zone_append": false, 00:10:09.979 "compare": false, 00:10:09.979 "compare_and_write": false, 00:10:09.979 "abort": true, 00:10:09.979 "seek_hole": false, 00:10:09.979 "seek_data": false, 00:10:09.979 "copy": true, 00:10:09.979 "nvme_iov_md": false 00:10:09.979 }, 00:10:09.979 "memory_domains": [ 00:10:09.979 { 00:10:09.979 "dma_device_id": "system", 00:10:09.979 "dma_device_type": 1 00:10:09.979 }, 00:10:09.979 { 00:10:09.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.979 "dma_device_type": 2 00:10:09.979 } 00:10:09.979 ], 00:10:09.979 "driver_specific": {} 00:10:09.979 } 00:10:09.979 ] 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.979 "name": "Existed_Raid", 00:10:09.979 "uuid": "d5d0a2c7-0573-4c6e-8ae6-135b3f92c8d8", 00:10:09.979 "strip_size_kb": 0, 00:10:09.979 "state": "configuring", 00:10:09.979 "raid_level": "raid1", 00:10:09.979 "superblock": true, 00:10:09.979 "num_base_bdevs": 2, 00:10:09.979 "num_base_bdevs_discovered": 1, 00:10:09.979 "num_base_bdevs_operational": 2, 00:10:09.979 "base_bdevs_list": [ 00:10:09.979 { 00:10:09.979 "name": "BaseBdev1", 00:10:09.979 "uuid": "3166a402-ca11-49db-bfcb-02c58f415ae1", 00:10:09.979 "is_configured": true, 00:10:09.979 "data_offset": 2048, 00:10:09.979 "data_size": 63488 00:10:09.979 }, 00:10:09.979 { 00:10:09.979 "name": "BaseBdev2", 00:10:09.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.979 "is_configured": false, 00:10:09.979 
"data_offset": 0, 00:10:09.979 "data_size": 0 00:10:09.979 } 00:10:09.979 ] 00:10:09.979 }' 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.979 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.545 [2024-11-15 10:38:40.858308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.545 [2024-11-15 10:38:40.858383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.545 [2024-11-15 10:38:40.866337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.545 [2024-11-15 10:38:40.868672] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.545 [2024-11-15 10:38:40.868736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.545 "name": "Existed_Raid", 00:10:10.545 "uuid": "f104749c-7f94-49db-ad42-2e6d994aaa81", 00:10:10.545 "strip_size_kb": 0, 00:10:10.545 "state": "configuring", 00:10:10.545 "raid_level": "raid1", 00:10:10.545 "superblock": true, 00:10:10.545 "num_base_bdevs": 2, 00:10:10.545 "num_base_bdevs_discovered": 1, 00:10:10.545 "num_base_bdevs_operational": 2, 00:10:10.545 "base_bdevs_list": [ 00:10:10.545 { 00:10:10.545 "name": "BaseBdev1", 00:10:10.545 "uuid": "3166a402-ca11-49db-bfcb-02c58f415ae1", 00:10:10.545 "is_configured": true, 00:10:10.545 "data_offset": 2048, 00:10:10.545 "data_size": 63488 00:10:10.545 }, 00:10:10.545 { 00:10:10.545 "name": "BaseBdev2", 00:10:10.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.545 "is_configured": false, 00:10:10.545 "data_offset": 0, 00:10:10.545 "data_size": 0 00:10:10.545 } 00:10:10.545 ] 00:10:10.545 }' 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.545 10:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.865 BaseBdev2 00:10:10.865 [2024-11-15 10:38:41.384792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.865 [2024-11-15 10:38:41.385101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:10.865 [2024-11-15 10:38:41.385122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:10.865 [2024-11-15 10:38:41.385471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:10:10.865 [2024-11-15 10:38:41.385675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:10.865 [2024-11-15 10:38:41.385697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:10.865 [2024-11-15 10:38:41.385883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.865 10:38:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.865 [ 00:10:11.140 { 00:10:11.140 "name": "BaseBdev2", 00:10:11.140 "aliases": [ 00:10:11.140 "1653e307-5e83-406d-bc7b-1a6405a2b4b0" 00:10:11.140 ], 00:10:11.140 "product_name": "Malloc disk", 00:10:11.140 "block_size": 512, 00:10:11.140 "num_blocks": 65536, 00:10:11.140 "uuid": "1653e307-5e83-406d-bc7b-1a6405a2b4b0", 00:10:11.140 "assigned_rate_limits": { 00:10:11.140 "rw_ios_per_sec": 0, 00:10:11.140 "rw_mbytes_per_sec": 0, 00:10:11.140 "r_mbytes_per_sec": 0, 00:10:11.140 "w_mbytes_per_sec": 0 00:10:11.140 }, 00:10:11.140 "claimed": true, 00:10:11.140 "claim_type": "exclusive_write", 00:10:11.140 "zoned": false, 00:10:11.140 "supported_io_types": { 00:10:11.140 "read": true, 00:10:11.140 "write": true, 00:10:11.140 "unmap": true, 00:10:11.140 "flush": true, 00:10:11.140 "reset": true, 00:10:11.140 "nvme_admin": false, 00:10:11.140 "nvme_io": false, 00:10:11.140 "nvme_io_md": false, 00:10:11.140 "write_zeroes": true, 00:10:11.140 "zcopy": true, 00:10:11.140 "get_zone_info": false, 00:10:11.140 "zone_management": false, 00:10:11.140 "zone_append": false, 00:10:11.140 "compare": false, 00:10:11.140 "compare_and_write": false, 00:10:11.140 "abort": true, 00:10:11.140 "seek_hole": false, 00:10:11.140 "seek_data": false, 00:10:11.140 "copy": true, 00:10:11.140 "nvme_iov_md": false 00:10:11.140 }, 00:10:11.140 "memory_domains": [ 00:10:11.140 { 00:10:11.140 "dma_device_id": "system", 00:10:11.140 "dma_device_type": 1 00:10:11.140 }, 00:10:11.140 { 00:10:11.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.140 "dma_device_type": 2 00:10:11.140 } 00:10:11.140 ], 00:10:11.140 "driver_specific": {} 00:10:11.140 } 00:10:11.140 ] 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:11.140 "name": "Existed_Raid", 00:10:11.140 "uuid": "f104749c-7f94-49db-ad42-2e6d994aaa81", 00:10:11.140 "strip_size_kb": 0, 00:10:11.140 "state": "online", 00:10:11.140 "raid_level": "raid1", 00:10:11.140 "superblock": true, 00:10:11.140 "num_base_bdevs": 2, 00:10:11.140 "num_base_bdevs_discovered": 2, 00:10:11.140 "num_base_bdevs_operational": 2, 00:10:11.140 "base_bdevs_list": [ 00:10:11.140 { 00:10:11.140 "name": "BaseBdev1", 00:10:11.140 "uuid": "3166a402-ca11-49db-bfcb-02c58f415ae1", 00:10:11.140 "is_configured": true, 00:10:11.140 "data_offset": 2048, 00:10:11.140 "data_size": 63488 00:10:11.140 }, 00:10:11.140 { 00:10:11.140 "name": "BaseBdev2", 00:10:11.140 "uuid": "1653e307-5e83-406d-bc7b-1a6405a2b4b0", 00:10:11.140 "is_configured": true, 00:10:11.140 "data_offset": 2048, 00:10:11.140 "data_size": 63488 00:10:11.140 } 00:10:11.140 ] 00:10:11.140 }' 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.140 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.399 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.399 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.399 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.399 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.399 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.399 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.399 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.399 10:38:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.399 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.399 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.399 [2024-11-15 10:38:41.949344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.658 10:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.658 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.658 "name": "Existed_Raid", 00:10:11.658 "aliases": [ 00:10:11.658 "f104749c-7f94-49db-ad42-2e6d994aaa81" 00:10:11.658 ], 00:10:11.658 "product_name": "Raid Volume", 00:10:11.658 "block_size": 512, 00:10:11.658 "num_blocks": 63488, 00:10:11.658 "uuid": "f104749c-7f94-49db-ad42-2e6d994aaa81", 00:10:11.658 "assigned_rate_limits": { 00:10:11.658 "rw_ios_per_sec": 0, 00:10:11.658 "rw_mbytes_per_sec": 0, 00:10:11.658 "r_mbytes_per_sec": 0, 00:10:11.658 "w_mbytes_per_sec": 0 00:10:11.658 }, 00:10:11.658 "claimed": false, 00:10:11.658 "zoned": false, 00:10:11.658 "supported_io_types": { 00:10:11.658 "read": true, 00:10:11.658 "write": true, 00:10:11.658 "unmap": false, 00:10:11.658 "flush": false, 00:10:11.658 "reset": true, 00:10:11.658 "nvme_admin": false, 00:10:11.658 "nvme_io": false, 00:10:11.658 "nvme_io_md": false, 00:10:11.658 "write_zeroes": true, 00:10:11.658 "zcopy": false, 00:10:11.658 "get_zone_info": false, 00:10:11.658 "zone_management": false, 00:10:11.658 "zone_append": false, 00:10:11.658 "compare": false, 00:10:11.658 "compare_and_write": false, 00:10:11.658 "abort": false, 00:10:11.658 "seek_hole": false, 00:10:11.658 "seek_data": false, 00:10:11.658 "copy": false, 00:10:11.658 "nvme_iov_md": false 00:10:11.658 }, 00:10:11.658 "memory_domains": [ 00:10:11.658 { 00:10:11.659 "dma_device_id": "system", 00:10:11.659 
"dma_device_type": 1 00:10:11.659 }, 00:10:11.659 { 00:10:11.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.659 "dma_device_type": 2 00:10:11.659 }, 00:10:11.659 { 00:10:11.659 "dma_device_id": "system", 00:10:11.659 "dma_device_type": 1 00:10:11.659 }, 00:10:11.659 { 00:10:11.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.659 "dma_device_type": 2 00:10:11.659 } 00:10:11.659 ], 00:10:11.659 "driver_specific": { 00:10:11.659 "raid": { 00:10:11.659 "uuid": "f104749c-7f94-49db-ad42-2e6d994aaa81", 00:10:11.659 "strip_size_kb": 0, 00:10:11.659 "state": "online", 00:10:11.659 "raid_level": "raid1", 00:10:11.659 "superblock": true, 00:10:11.659 "num_base_bdevs": 2, 00:10:11.659 "num_base_bdevs_discovered": 2, 00:10:11.659 "num_base_bdevs_operational": 2, 00:10:11.659 "base_bdevs_list": [ 00:10:11.659 { 00:10:11.659 "name": "BaseBdev1", 00:10:11.659 "uuid": "3166a402-ca11-49db-bfcb-02c58f415ae1", 00:10:11.659 "is_configured": true, 00:10:11.659 "data_offset": 2048, 00:10:11.659 "data_size": 63488 00:10:11.659 }, 00:10:11.659 { 00:10:11.659 "name": "BaseBdev2", 00:10:11.659 "uuid": "1653e307-5e83-406d-bc7b-1a6405a2b4b0", 00:10:11.659 "is_configured": true, 00:10:11.659 "data_offset": 2048, 00:10:11.659 "data_size": 63488 00:10:11.659 } 00:10:11.659 ] 00:10:11.659 } 00:10:11.659 } 00:10:11.659 }' 00:10:11.659 10:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.659 BaseBdev2' 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.659 10:38:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.659 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.659 [2024-11-15 10:38:42.197102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.918 "name": "Existed_Raid", 00:10:11.918 "uuid": "f104749c-7f94-49db-ad42-2e6d994aaa81", 00:10:11.918 "strip_size_kb": 0, 00:10:11.918 "state": "online", 00:10:11.918 "raid_level": "raid1", 00:10:11.918 "superblock": true, 00:10:11.918 "num_base_bdevs": 2, 00:10:11.918 "num_base_bdevs_discovered": 1, 00:10:11.918 "num_base_bdevs_operational": 1, 00:10:11.918 "base_bdevs_list": [ 00:10:11.918 { 00:10:11.918 "name": null, 00:10:11.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.918 "is_configured": false, 00:10:11.918 "data_offset": 0, 00:10:11.918 "data_size": 63488 00:10:11.918 }, 00:10:11.918 { 00:10:11.918 "name": "BaseBdev2", 00:10:11.918 "uuid": "1653e307-5e83-406d-bc7b-1a6405a2b4b0", 00:10:11.918 "is_configured": true, 00:10:11.918 "data_offset": 2048, 00:10:11.918 "data_size": 63488 00:10:11.918 } 00:10:11.918 ] 00:10:11.918 }' 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.918 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.485 [2024-11-15 10:38:42.817971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.485 [2024-11-15 10:38:42.818252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.485 [2024-11-15 10:38:42.899112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.485 [2024-11-15 10:38:42.899404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.485 [2024-11-15 10:38:42.899570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63149 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63149 ']' 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63149 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63149 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:12.485 killing process with pid 63149 
00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63149' 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63149 00:10:12.485 [2024-11-15 10:38:42.989768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.485 10:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63149 00:10:12.485 [2024-11-15 10:38:43.004104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.862 10:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:13.862 00:10:13.862 real 0m5.396s 00:10:13.862 user 0m8.267s 00:10:13.862 sys 0m0.671s 00:10:13.862 ************************************ 00:10:13.862 END TEST raid_state_function_test_sb 00:10:13.862 ************************************ 00:10:13.862 10:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:13.862 10:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.862 10:38:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:13.862 10:38:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:13.862 10:38:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:13.862 10:38:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.862 ************************************ 00:10:13.862 START TEST raid_superblock_test 00:10:13.862 ************************************ 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63401 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63401 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63401 ']' 00:10:13.862 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:13.862 10:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.862 [2024-11-15 10:38:44.132641] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:10:13.862 [2024-11-15 10:38:44.133416] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63401 ] 00:10:13.862 [2024-11-15 10:38:44.307854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.862 [2024-11-15 10:38:44.413407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.121 [2024-11-15 10:38:44.593579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.121 [2024-11-15 10:38:44.593633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.687 malloc1 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.687 [2024-11-15 10:38:45.168151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.687 [2024-11-15 10:38:45.168230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.687 [2024-11-15 10:38:45.168264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:14.687 [2024-11-15 10:38:45.168281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.687 [2024-11-15 10:38:45.170897] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.687 [2024-11-15 10:38:45.171092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:14.687 pt1 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.687 malloc2 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.687 10:38:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.687 [2024-11-15 10:38:45.215807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.687 [2024-11-15 10:38:45.216021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.687 [2024-11-15 10:38:45.216071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:14.687 [2024-11-15 10:38:45.216088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.687 [2024-11-15 10:38:45.218727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.687 [2024-11-15 10:38:45.218774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.687 pt2 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:14.687 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.688 [2024-11-15 10:38:45.227869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:14.688 [2024-11-15 10:38:45.230250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.688 [2024-11-15 10:38:45.230619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:14.688 [2024-11-15 10:38:45.230774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:14.688 [2024-11-15 
10:38:45.231130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:14.688 [2024-11-15 10:38:45.231510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:14.688 [2024-11-15 10:38:45.231667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:14.688 [2024-11-15 10:38:45.232045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.688 10:38:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.688 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.946 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.946 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.946 "name": "raid_bdev1", 00:10:14.946 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:14.946 "strip_size_kb": 0, 00:10:14.946 "state": "online", 00:10:14.946 "raid_level": "raid1", 00:10:14.946 "superblock": true, 00:10:14.946 "num_base_bdevs": 2, 00:10:14.946 "num_base_bdevs_discovered": 2, 00:10:14.946 "num_base_bdevs_operational": 2, 00:10:14.946 "base_bdevs_list": [ 00:10:14.946 { 00:10:14.946 "name": "pt1", 00:10:14.946 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.946 "is_configured": true, 00:10:14.946 "data_offset": 2048, 00:10:14.946 "data_size": 63488 00:10:14.946 }, 00:10:14.946 { 00:10:14.946 "name": "pt2", 00:10:14.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.946 "is_configured": true, 00:10:14.946 "data_offset": 2048, 00:10:14.946 "data_size": 63488 00:10:14.946 } 00:10:14.946 ] 00:10:14.946 }' 00:10:14.946 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.947 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.205 
10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.205 [2024-11-15 10:38:45.728486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.205 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.463 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.463 "name": "raid_bdev1", 00:10:15.463 "aliases": [ 00:10:15.463 "96e3294c-992f-49d9-9bd3-e0acb912baaa" 00:10:15.463 ], 00:10:15.463 "product_name": "Raid Volume", 00:10:15.463 "block_size": 512, 00:10:15.463 "num_blocks": 63488, 00:10:15.463 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:15.463 "assigned_rate_limits": { 00:10:15.463 "rw_ios_per_sec": 0, 00:10:15.463 "rw_mbytes_per_sec": 0, 00:10:15.463 "r_mbytes_per_sec": 0, 00:10:15.463 "w_mbytes_per_sec": 0 00:10:15.463 }, 00:10:15.463 "claimed": false, 00:10:15.463 "zoned": false, 00:10:15.463 "supported_io_types": { 00:10:15.463 "read": true, 00:10:15.463 "write": true, 00:10:15.463 "unmap": false, 00:10:15.463 "flush": false, 00:10:15.463 "reset": true, 00:10:15.463 "nvme_admin": false, 00:10:15.463 "nvme_io": false, 00:10:15.463 "nvme_io_md": false, 00:10:15.463 "write_zeroes": true, 00:10:15.463 "zcopy": false, 00:10:15.463 "get_zone_info": false, 00:10:15.463 "zone_management": false, 00:10:15.463 "zone_append": false, 00:10:15.463 "compare": false, 00:10:15.463 "compare_and_write": false, 00:10:15.463 "abort": false, 00:10:15.463 "seek_hole": false, 
00:10:15.463 "seek_data": false, 00:10:15.463 "copy": false, 00:10:15.463 "nvme_iov_md": false 00:10:15.463 }, 00:10:15.463 "memory_domains": [ 00:10:15.463 { 00:10:15.463 "dma_device_id": "system", 00:10:15.463 "dma_device_type": 1 00:10:15.463 }, 00:10:15.463 { 00:10:15.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.463 "dma_device_type": 2 00:10:15.463 }, 00:10:15.463 { 00:10:15.463 "dma_device_id": "system", 00:10:15.463 "dma_device_type": 1 00:10:15.463 }, 00:10:15.463 { 00:10:15.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.463 "dma_device_type": 2 00:10:15.463 } 00:10:15.463 ], 00:10:15.463 "driver_specific": { 00:10:15.463 "raid": { 00:10:15.463 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:15.463 "strip_size_kb": 0, 00:10:15.463 "state": "online", 00:10:15.463 "raid_level": "raid1", 00:10:15.463 "superblock": true, 00:10:15.463 "num_base_bdevs": 2, 00:10:15.463 "num_base_bdevs_discovered": 2, 00:10:15.463 "num_base_bdevs_operational": 2, 00:10:15.463 "base_bdevs_list": [ 00:10:15.463 { 00:10:15.463 "name": "pt1", 00:10:15.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.463 "is_configured": true, 00:10:15.463 "data_offset": 2048, 00:10:15.463 "data_size": 63488 00:10:15.463 }, 00:10:15.463 { 00:10:15.463 "name": "pt2", 00:10:15.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.464 "is_configured": true, 00:10:15.464 "data_offset": 2048, 00:10:15.464 "data_size": 63488 00:10:15.464 } 00:10:15.464 ] 00:10:15.464 } 00:10:15.464 } 00:10:15.464 }' 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:15.464 pt2' 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.464 10:38:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.464 10:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.464 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.464 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.464 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
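For context on what the `verify_raid_bdev_state` calls traced above are doing: the helper fetches `bdev_raid_get_bdevs` output, selects the target bdev with `jq`, and compares its state, raid level, strip size, and base-bdev counts against the expected values. Below is a minimal standalone sketch of those comparisons, re-implemented in Python purely for illustration (it is not part of the SPDK test suite; the JSON fields are copied verbatim from the `raid_bdev_info` output captured in this log, trimmed to the fields the check inspects):

```python
import json

# JSON as reported by `rpc_cmd bdev_raid_get_bdevs all` in the log above,
# trimmed to the fields the state check inspects.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    # Simplified sketch of the shell helper's comparisons:
    # any mismatch here corresponds to a test failure in bdev_raid.sh.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational

# Equivalent of the call traced at bdev_raid.sh@431:
#   verify_raid_bdev_state raid_bdev1 online raid1 0 2
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
```

With both passthru bdevs configured, the check passes; after `bdev_raid_delete` (later in the log) the same query returns no matching bdev, which is why the subsequent trace shows `raid_bdev=` set to an empty string.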
00:10:15.464 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.464 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.464 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.464 [2024-11-15 10:38:46.020515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.722 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.722 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=96e3294c-992f-49d9-9bd3-e0acb912baaa 00:10:15.722 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 96e3294c-992f-49d9-9bd3-e0acb912baaa ']' 00:10:15.722 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.722 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.722 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.722 [2024-11-15 10:38:46.072158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.722 [2024-11-15 10:38:46.072308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.722 [2024-11-15 10:38:46.072544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.723 [2024-11-15 10:38:46.072731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.723 [2024-11-15 10:38:46.072881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.723 [2024-11-15 10:38:46.204215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:15.723 [2024-11-15 10:38:46.206726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:15.723 [2024-11-15 10:38:46.206819] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:10:15.723 [2024-11-15 10:38:46.206898] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:15.723 [2024-11-15 10:38:46.206925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.723 [2024-11-15 10:38:46.206941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:15.723 request: 00:10:15.723 { 00:10:15.723 "name": "raid_bdev1", 00:10:15.723 "raid_level": "raid1", 00:10:15.723 "base_bdevs": [ 00:10:15.723 "malloc1", 00:10:15.723 "malloc2" 00:10:15.723 ], 00:10:15.723 "superblock": false, 00:10:15.723 "method": "bdev_raid_create", 00:10:15.723 "req_id": 1 00:10:15.723 } 00:10:15.723 Got JSON-RPC error response 00:10:15.723 response: 00:10:15.723 { 00:10:15.723 "code": -17, 00:10:15.723 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:15.723 } 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.723 [2024-11-15 10:38:46.268222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:15.723 [2024-11-15 10:38:46.268435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.723 [2024-11-15 10:38:46.268582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:15.723 [2024-11-15 10:38:46.268710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.723 [2024-11-15 10:38:46.271456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.723 [2024-11-15 10:38:46.271622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:15.723 [2024-11-15 10:38:46.271737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:15.723 [2024-11-15 10:38:46.271816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:15.723 pt1 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.723 10:38:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.723 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.980 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.980 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.980 "name": "raid_bdev1", 00:10:15.980 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:15.980 "strip_size_kb": 0, 00:10:15.980 "state": "configuring", 00:10:15.980 "raid_level": "raid1", 00:10:15.980 "superblock": true, 00:10:15.980 "num_base_bdevs": 2, 00:10:15.980 "num_base_bdevs_discovered": 1, 00:10:15.980 "num_base_bdevs_operational": 2, 00:10:15.980 "base_bdevs_list": [ 00:10:15.980 { 00:10:15.980 "name": "pt1", 00:10:15.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.980 
"is_configured": true, 00:10:15.980 "data_offset": 2048, 00:10:15.980 "data_size": 63488 00:10:15.980 }, 00:10:15.980 { 00:10:15.980 "name": null, 00:10:15.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.980 "is_configured": false, 00:10:15.980 "data_offset": 2048, 00:10:15.980 "data_size": 63488 00:10:15.980 } 00:10:15.980 ] 00:10:15.980 }' 00:10:15.980 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.980 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.238 [2024-11-15 10:38:46.788391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.238 [2024-11-15 10:38:46.788623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.238 [2024-11-15 10:38:46.788701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:16.238 [2024-11-15 10:38:46.788727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.238 [2024-11-15 10:38:46.789300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.238 [2024-11-15 10:38:46.789333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.238 [2024-11-15 10:38:46.789451] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:16.238 [2024-11-15 10:38:46.789492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.238 [2024-11-15 10:38:46.789640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.238 [2024-11-15 10:38:46.789662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.238 [2024-11-15 10:38:46.789958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.238 [2024-11-15 10:38:46.790152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.238 [2024-11-15 10:38:46.790167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:16.238 [2024-11-15 10:38:46.790338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.238 pt2 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.238 
10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.238 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.495 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.495 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.495 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.495 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.495 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.495 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.495 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.495 "name": "raid_bdev1", 00:10:16.495 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:16.495 "strip_size_kb": 0, 00:10:16.495 "state": "online", 00:10:16.495 "raid_level": "raid1", 00:10:16.495 "superblock": true, 00:10:16.495 "num_base_bdevs": 2, 00:10:16.495 "num_base_bdevs_discovered": 2, 00:10:16.495 "num_base_bdevs_operational": 2, 00:10:16.495 "base_bdevs_list": [ 00:10:16.495 { 00:10:16.495 "name": "pt1", 00:10:16.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.495 "is_configured": true, 00:10:16.495 "data_offset": 2048, 00:10:16.495 "data_size": 63488 00:10:16.495 }, 00:10:16.495 { 00:10:16.495 "name": "pt2", 00:10:16.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.495 "is_configured": true, 00:10:16.495 "data_offset": 2048, 00:10:16.495 "data_size": 63488 00:10:16.495 } 00:10:16.495 ] 00:10:16.495 }' 00:10:16.495 10:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:16.495 10:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.754 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:16.754 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:16.754 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.754 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.013 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.013 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.013 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.013 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.013 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.013 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.014 [2024-11-15 10:38:47.320832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.014 "name": "raid_bdev1", 00:10:17.014 "aliases": [ 00:10:17.014 "96e3294c-992f-49d9-9bd3-e0acb912baaa" 00:10:17.014 ], 00:10:17.014 "product_name": "Raid Volume", 00:10:17.014 "block_size": 512, 00:10:17.014 "num_blocks": 63488, 00:10:17.014 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:17.014 "assigned_rate_limits": { 00:10:17.014 "rw_ios_per_sec": 0, 00:10:17.014 "rw_mbytes_per_sec": 0, 00:10:17.014 "r_mbytes_per_sec": 0, 00:10:17.014 "w_mbytes_per_sec": 0 
00:10:17.014 }, 00:10:17.014 "claimed": false, 00:10:17.014 "zoned": false, 00:10:17.014 "supported_io_types": { 00:10:17.014 "read": true, 00:10:17.014 "write": true, 00:10:17.014 "unmap": false, 00:10:17.014 "flush": false, 00:10:17.014 "reset": true, 00:10:17.014 "nvme_admin": false, 00:10:17.014 "nvme_io": false, 00:10:17.014 "nvme_io_md": false, 00:10:17.014 "write_zeroes": true, 00:10:17.014 "zcopy": false, 00:10:17.014 "get_zone_info": false, 00:10:17.014 "zone_management": false, 00:10:17.014 "zone_append": false, 00:10:17.014 "compare": false, 00:10:17.014 "compare_and_write": false, 00:10:17.014 "abort": false, 00:10:17.014 "seek_hole": false, 00:10:17.014 "seek_data": false, 00:10:17.014 "copy": false, 00:10:17.014 "nvme_iov_md": false 00:10:17.014 }, 00:10:17.014 "memory_domains": [ 00:10:17.014 { 00:10:17.014 "dma_device_id": "system", 00:10:17.014 "dma_device_type": 1 00:10:17.014 }, 00:10:17.014 { 00:10:17.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.014 "dma_device_type": 2 00:10:17.014 }, 00:10:17.014 { 00:10:17.014 "dma_device_id": "system", 00:10:17.014 "dma_device_type": 1 00:10:17.014 }, 00:10:17.014 { 00:10:17.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.014 "dma_device_type": 2 00:10:17.014 } 00:10:17.014 ], 00:10:17.014 "driver_specific": { 00:10:17.014 "raid": { 00:10:17.014 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:17.014 "strip_size_kb": 0, 00:10:17.014 "state": "online", 00:10:17.014 "raid_level": "raid1", 00:10:17.014 "superblock": true, 00:10:17.014 "num_base_bdevs": 2, 00:10:17.014 "num_base_bdevs_discovered": 2, 00:10:17.014 "num_base_bdevs_operational": 2, 00:10:17.014 "base_bdevs_list": [ 00:10:17.014 { 00:10:17.014 "name": "pt1", 00:10:17.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.014 "is_configured": true, 00:10:17.014 "data_offset": 2048, 00:10:17.014 "data_size": 63488 00:10:17.014 }, 00:10:17.014 { 00:10:17.014 "name": "pt2", 00:10:17.014 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:17.014 "is_configured": true, 00:10:17.014 "data_offset": 2048, 00:10:17.014 "data_size": 63488 00:10:17.014 } 00:10:17.014 ] 00:10:17.014 } 00:10:17.014 } 00:10:17.014 }' 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:17.014 pt2' 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.014 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.272 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.272 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.273 [2024-11-15 10:38:47.576903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 96e3294c-992f-49d9-9bd3-e0acb912baaa '!=' 96e3294c-992f-49d9-9bd3-e0acb912baaa ']' 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.273 [2024-11-15 10:38:47.628666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:17.273 "name": "raid_bdev1", 00:10:17.273 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:17.273 "strip_size_kb": 0, 00:10:17.273 "state": "online", 00:10:17.273 "raid_level": "raid1", 00:10:17.273 "superblock": true, 00:10:17.273 "num_base_bdevs": 2, 00:10:17.273 "num_base_bdevs_discovered": 1, 00:10:17.273 "num_base_bdevs_operational": 1, 00:10:17.273 "base_bdevs_list": [ 00:10:17.273 { 00:10:17.273 "name": null, 00:10:17.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.273 "is_configured": false, 00:10:17.273 "data_offset": 0, 00:10:17.273 "data_size": 63488 00:10:17.273 }, 00:10:17.273 { 00:10:17.273 "name": "pt2", 00:10:17.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.273 "is_configured": true, 00:10:17.273 "data_offset": 2048, 00:10:17.273 "data_size": 63488 00:10:17.273 } 00:10:17.273 ] 00:10:17.273 }' 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.273 10:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.840 [2024-11-15 10:38:48.168795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.840 [2024-11-15 10:38:48.169021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.840 [2024-11-15 10:38:48.169163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.840 [2024-11-15 10:38:48.169231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.840 [2024-11-15 10:38:48.169260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.840 [2024-11-15 10:38:48.256747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.840 [2024-11-15 10:38:48.256941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.840 [2024-11-15 10:38:48.257079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:17.840 [2024-11-15 10:38:48.257203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.840 [2024-11-15 10:38:48.259941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.840 [2024-11-15 10:38:48.260108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.840 [2024-11-15 10:38:48.260323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:17.840 [2024-11-15 10:38:48.260525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.840 pt2 00:10:17.840 [2024-11-15 10:38:48.260808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:17.840 [2024-11-15 10:38:48.260843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.840 [2024-11-15 10:38:48.261142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:17.840 [2024-11-15 10:38:48.261341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:17.840 [2024-11-15 10:38:48.261378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:10:17.840 [2024-11-15 10:38:48.261606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.840 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.841 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.841 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.841 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:17.841 "name": "raid_bdev1", 00:10:17.841 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:17.841 "strip_size_kb": 0, 00:10:17.841 "state": "online", 00:10:17.841 "raid_level": "raid1", 00:10:17.841 "superblock": true, 00:10:17.841 "num_base_bdevs": 2, 00:10:17.841 "num_base_bdevs_discovered": 1, 00:10:17.841 "num_base_bdevs_operational": 1, 00:10:17.841 "base_bdevs_list": [ 00:10:17.841 { 00:10:17.841 "name": null, 00:10:17.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.841 "is_configured": false, 00:10:17.841 "data_offset": 2048, 00:10:17.841 "data_size": 63488 00:10:17.841 }, 00:10:17.841 { 00:10:17.841 "name": "pt2", 00:10:17.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.841 "is_configured": true, 00:10:17.841 "data_offset": 2048, 00:10:17.841 "data_size": 63488 00:10:17.841 } 00:10:17.841 ] 00:10:17.841 }' 00:10:17.841 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.841 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.408 [2024-11-15 10:38:48.785025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.408 [2024-11-15 10:38:48.785196] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.408 [2024-11-15 10:38:48.785317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.408 [2024-11-15 10:38:48.785408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.408 [2024-11-15 10:38:48.785426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.408 [2024-11-15 10:38:48.841095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.408 [2024-11-15 10:38:48.841344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.408 [2024-11-15 10:38:48.841438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:18.408 [2024-11-15 10:38:48.841579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.408 [2024-11-15 10:38:48.844476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.408 [2024-11-15 10:38:48.844644] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.408 [2024-11-15 10:38:48.844910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:18.408 [2024-11-15 10:38:48.845076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:18.408 [2024-11-15 10:38:48.845424] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:18.408 [2024-11-15 10:38:48.845585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.408 [2024-11-15 10:38:48.845698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:18.408 pt1 00:10:18.408 [2024-11-15 10:38:48.845885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.408 [2024-11-15 10:38:48.846057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:18.408 [2024-11-15 10:38:48.846075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:18.408 [2024-11-15 10:38:48.846566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.408 [2024-11-15 10:38:48.846874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:18.408 [2024-11-15 10:38:48.846899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:18.408 [2024-11-15 10:38:48.847088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.408 "name": "raid_bdev1", 00:10:18.408 "uuid": "96e3294c-992f-49d9-9bd3-e0acb912baaa", 00:10:18.408 "strip_size_kb": 0, 00:10:18.408 "state": "online", 00:10:18.408 "raid_level": "raid1", 00:10:18.408 "superblock": true, 00:10:18.408 "num_base_bdevs": 2, 00:10:18.408 "num_base_bdevs_discovered": 1, 00:10:18.408 "num_base_bdevs_operational": 
1, 00:10:18.408 "base_bdevs_list": [ 00:10:18.408 { 00:10:18.408 "name": null, 00:10:18.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.408 "is_configured": false, 00:10:18.408 "data_offset": 2048, 00:10:18.408 "data_size": 63488 00:10:18.408 }, 00:10:18.408 { 00:10:18.408 "name": "pt2", 00:10:18.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.408 "is_configured": true, 00:10:18.408 "data_offset": 2048, 00:10:18.408 "data_size": 63488 00:10:18.408 } 00:10:18.408 ] 00:10:18.408 }' 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.408 10:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.975 10:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:18.975 10:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:18.975 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.975 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.975 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.975 10:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.976 [2024-11-15 10:38:49.421959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 96e3294c-992f-49d9-9bd3-e0acb912baaa '!=' 96e3294c-992f-49d9-9bd3-e0acb912baaa ']' 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63401 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63401 ']' 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63401 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63401 00:10:18.976 killing process with pid 63401 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63401' 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63401 00:10:18.976 [2024-11-15 10:38:49.492299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.976 [2024-11-15 10:38:49.492436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.976 [2024-11-15 10:38:49.492504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.976 [2024-11-15 10:38:49.492530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:18.976 10:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 
63401 00:10:19.234 [2024-11-15 10:38:49.666678] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.168 10:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:20.168 00:10:20.168 real 0m6.615s 00:10:20.168 user 0m10.612s 00:10:20.168 sys 0m0.831s 00:10:20.168 ************************************ 00:10:20.168 END TEST raid_superblock_test 00:10:20.168 ************************************ 00:10:20.168 10:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.168 10:38:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.168 10:38:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:20.168 10:38:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:20.168 10:38:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.168 10:38:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.168 ************************************ 00:10:20.168 START TEST raid_read_error_test 00:10:20.168 ************************************ 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.168 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JsgJ0GTIG6 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63731 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63731 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:20.169 
10:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63731 ']' 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:20.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:20.169 10:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.427 [2024-11-15 10:38:50.850812] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:10:20.427 [2024-11-15 10:38:50.850992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63731 ] 00:10:20.685 [2024-11-15 10:38:51.039060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.685 [2024-11-15 10:38:51.170300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.944 [2024-11-15 10:38:51.351600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.944 [2024-11-15 10:38:51.351672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.597 BaseBdev1_malloc 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.597 true 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.597 [2024-11-15 10:38:51.949001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:21.597 [2024-11-15 10:38:51.949076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.597 [2024-11-15 10:38:51.949108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:21.597 [2024-11-15 10:38:51.949126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.597 [2024-11-15 10:38:51.951810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.597 [2024-11-15 10:38:51.952007] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:10:21.597 BaseBdev1 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.597 BaseBdev2_malloc 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.597 true 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.597 10:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.597 [2024-11-15 10:38:52.000524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:21.597 [2024-11-15 10:38:52.000734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.597 [2024-11-15 10:38:52.000771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:21.597 [2024-11-15 10:38:52.000790] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.597 [2024-11-15 10:38:52.003434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.597 [2024-11-15 10:38:52.003486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:21.597 BaseBdev2 00:10:21.597 10:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.597 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:21.597 10:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.597 10:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.597 [2024-11-15 10:38:52.008596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.597 [2024-11-15 10:38:52.010993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.597 [2024-11-15 10:38:52.011280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:21.597 [2024-11-15 10:38:52.011306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.598 [2024-11-15 10:38:52.011633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:21.598 [2024-11-15 10:38:52.011866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:21.598 [2024-11-15 10:38:52.011892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:21.598 [2024-11-15 10:38:52.012093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.598 "name": "raid_bdev1", 00:10:21.598 "uuid": "e634d4e3-df12-432b-a423-90a7b0790e6d", 00:10:21.598 "strip_size_kb": 0, 00:10:21.598 "state": "online", 00:10:21.598 "raid_level": "raid1", 00:10:21.598 "superblock": true, 00:10:21.598 "num_base_bdevs": 2, 00:10:21.598 
"num_base_bdevs_discovered": 2, 00:10:21.598 "num_base_bdevs_operational": 2, 00:10:21.598 "base_bdevs_list": [ 00:10:21.598 { 00:10:21.598 "name": "BaseBdev1", 00:10:21.598 "uuid": "5eef15f3-e484-5dfa-bcb3-6b6d1e2f89fc", 00:10:21.598 "is_configured": true, 00:10:21.598 "data_offset": 2048, 00:10:21.598 "data_size": 63488 00:10:21.598 }, 00:10:21.598 { 00:10:21.598 "name": "BaseBdev2", 00:10:21.598 "uuid": "b811625b-0fbe-54b6-87a2-34781710721a", 00:10:21.598 "is_configured": true, 00:10:21.598 "data_offset": 2048, 00:10:21.598 "data_size": 63488 00:10:21.598 } 00:10:21.598 ] 00:10:21.598 }' 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.598 10:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.165 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:22.165 10:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:22.165 [2024-11-15 10:38:52.626019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:23.099 10:38:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.099 "name": "raid_bdev1", 00:10:23.099 "uuid": "e634d4e3-df12-432b-a423-90a7b0790e6d", 00:10:23.099 "strip_size_kb": 0, 00:10:23.099 "state": "online", 
00:10:23.099 "raid_level": "raid1", 00:10:23.099 "superblock": true, 00:10:23.099 "num_base_bdevs": 2, 00:10:23.099 "num_base_bdevs_discovered": 2, 00:10:23.099 "num_base_bdevs_operational": 2, 00:10:23.099 "base_bdevs_list": [ 00:10:23.099 { 00:10:23.099 "name": "BaseBdev1", 00:10:23.099 "uuid": "5eef15f3-e484-5dfa-bcb3-6b6d1e2f89fc", 00:10:23.099 "is_configured": true, 00:10:23.099 "data_offset": 2048, 00:10:23.099 "data_size": 63488 00:10:23.099 }, 00:10:23.099 { 00:10:23.099 "name": "BaseBdev2", 00:10:23.099 "uuid": "b811625b-0fbe-54b6-87a2-34781710721a", 00:10:23.099 "is_configured": true, 00:10:23.099 "data_offset": 2048, 00:10:23.099 "data_size": 63488 00:10:23.099 } 00:10:23.099 ] 00:10:23.099 }' 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.099 10:38:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.666 10:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.666 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.666 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.666 [2024-11-15 10:38:54.018692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.666 [2024-11-15 10:38:54.018874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.666 [2024-11-15 10:38:54.022685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.666 [2024-11-15 10:38:54.022932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.666 [2024-11-15 10:38:54.023196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.666 [2024-11-15 10:38:54.023408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:10:23.666 { 00:10:23.666 "results": [ 00:10:23.666 { 00:10:23.666 "job": "raid_bdev1", 00:10:23.666 "core_mask": "0x1", 00:10:23.666 "workload": "randrw", 00:10:23.666 "percentage": 50, 00:10:23.666 "status": "finished", 00:10:23.666 "queue_depth": 1, 00:10:23.666 "io_size": 131072, 00:10:23.666 "runtime": 1.390547, 00:10:23.666 "iops": 12996.324467997127, 00:10:23.666 "mibps": 1624.5405584996408, 00:10:23.666 "io_failed": 0, 00:10:23.666 "io_timeout": 0, 00:10:23.666 "avg_latency_us": 72.18327015171637, 00:10:23.666 "min_latency_us": 45.14909090909091, 00:10:23.666 "max_latency_us": 1921.3963636363637 00:10:23.667 } 00:10:23.667 ], 00:10:23.667 "core_count": 1 00:10:23.667 } 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63731 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63731 ']' 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63731 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63731 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:23.667 killing process with pid 63731 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63731' 00:10:23.667 10:38:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63731 00:10:23.667 10:38:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63731 00:10:23.667 [2024-11-15 10:38:54.062587] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.667 [2024-11-15 10:38:54.176264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JsgJ0GTIG6 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.043 ************************************ 00:10:25.043 END TEST raid_read_error_test 00:10:25.043 ************************************ 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:25.043 00:10:25.043 real 0m4.534s 00:10:25.043 user 0m5.779s 00:10:25.043 sys 0m0.477s 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:25.043 10:38:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.043 10:38:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:10:25.043 10:38:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:25.043 10:38:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:25.043 10:38:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.043 ************************************ 00:10:25.043 START TEST 
raid_write_error_test 00:10:25.043 ************************************ 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:25.043 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.044 10:38:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xQ5zq1gxlE 00:10:25.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63878 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63878 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63878 ']' 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:25.044 10:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.044 [2024-11-15 10:38:55.388296] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:10:25.044 [2024-11-15 10:38:55.388460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63878 ] 00:10:25.044 [2024-11-15 10:38:55.560865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.303 [2024-11-15 10:38:55.687880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.561 [2024-11-15 10:38:55.901900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.561 [2024-11-15 10:38:55.901967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.820 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:25.820 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:25.820 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.820 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.820 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.820 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.079 BaseBdev1_malloc 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.079 true 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.079 [2024-11-15 10:38:56.412407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:26.079 [2024-11-15 10:38:56.412648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.079 [2024-11-15 10:38:56.412694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:26.079 [2024-11-15 10:38:56.412713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.079 [2024-11-15 10:38:56.415547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.079 BaseBdev1 00:10:26.079 [2024-11-15 10:38:56.415729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.079 BaseBdev2_malloc 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:26.079 10:38:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.079 true 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.079 [2024-11-15 10:38:56.465196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:26.079 [2024-11-15 10:38:56.465415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.079 [2024-11-15 10:38:56.465451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:26.079 [2024-11-15 10:38:56.465468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.079 [2024-11-15 10:38:56.468206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.079 [2024-11-15 10:38:56.468259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:26.079 BaseBdev2 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.079 [2024-11-15 10:38:56.473270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:26.079 [2024-11-15 10:38:56.475874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.079 [2024-11-15 10:38:56.476272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:26.079 [2024-11-15 10:38:56.476437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.079 [2024-11-15 10:38:56.476779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:26.079 [2024-11-15 10:38:56.477016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:26.079 [2024-11-15 10:38:56.477034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:26.079 [2024-11-15 10:38:56.477297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.079 "name": "raid_bdev1", 00:10:26.079 "uuid": "78c6241b-a7a1-421e-b622-ad0443dcf5fd", 00:10:26.079 "strip_size_kb": 0, 00:10:26.079 "state": "online", 00:10:26.079 "raid_level": "raid1", 00:10:26.079 "superblock": true, 00:10:26.079 "num_base_bdevs": 2, 00:10:26.079 "num_base_bdevs_discovered": 2, 00:10:26.079 "num_base_bdevs_operational": 2, 00:10:26.079 "base_bdevs_list": [ 00:10:26.079 { 00:10:26.079 "name": "BaseBdev1", 00:10:26.079 "uuid": "0089ee77-2145-5047-9aa3-f9ca0d9ffd27", 00:10:26.079 "is_configured": true, 00:10:26.079 "data_offset": 2048, 00:10:26.079 "data_size": 63488 00:10:26.079 }, 00:10:26.079 { 00:10:26.079 "name": "BaseBdev2", 00:10:26.079 "uuid": "97248319-1150-53b7-8b3f-f32402eca837", 00:10:26.079 "is_configured": true, 00:10:26.079 "data_offset": 2048, 00:10:26.079 "data_size": 63488 00:10:26.079 } 00:10:26.079 ] 00:10:26.079 }' 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.079 10:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.646 10:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:26.646 10:38:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.646 [2024-11-15 10:38:57.118816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.582 [2024-11-15 10:38:58.017769] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:27.582 [2024-11-15 10:38:58.017838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.582 [2024-11-15 10:38:58.018063] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.582 "name": "raid_bdev1", 00:10:27.582 "uuid": "78c6241b-a7a1-421e-b622-ad0443dcf5fd", 00:10:27.582 "strip_size_kb": 0, 00:10:27.582 "state": "online", 00:10:27.582 "raid_level": "raid1", 00:10:27.582 "superblock": true, 00:10:27.582 "num_base_bdevs": 2, 00:10:27.582 "num_base_bdevs_discovered": 1, 00:10:27.582 "num_base_bdevs_operational": 1, 00:10:27.582 "base_bdevs_list": [ 00:10:27.582 { 00:10:27.582 "name": null, 00:10:27.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.582 "is_configured": false, 00:10:27.582 "data_offset": 0, 00:10:27.582 "data_size": 63488 00:10:27.582 }, 00:10:27.582 { 00:10:27.582 "name": 
"BaseBdev2", 00:10:27.582 "uuid": "97248319-1150-53b7-8b3f-f32402eca837", 00:10:27.582 "is_configured": true, 00:10:27.582 "data_offset": 2048, 00:10:27.582 "data_size": 63488 00:10:27.582 } 00:10:27.582 ] 00:10:27.582 }' 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.582 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.150 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.150 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.150 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.150 [2024-11-15 10:38:58.554292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.150 [2024-11-15 10:38:58.554329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.150 [2024-11-15 10:38:58.557895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.150 [2024-11-15 10:38:58.558067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.150 [2024-11-15 10:38:58.558194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.150 [2024-11-15 10:38:58.558445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:28.150 { 00:10:28.150 "results": [ 00:10:28.150 { 00:10:28.150 "job": "raid_bdev1", 00:10:28.150 "core_mask": "0x1", 00:10:28.150 "workload": "randrw", 00:10:28.150 "percentage": 50, 00:10:28.150 "status": "finished", 00:10:28.150 "queue_depth": 1, 00:10:28.150 "io_size": 131072, 00:10:28.150 "runtime": 1.433144, 00:10:28.150 "iops": 15729.75220912902, 00:10:28.150 "mibps": 1966.2190261411274, 00:10:28.150 "io_failed": 0, 00:10:28.150 "io_timeout": 0, 
00:10:28.151 "avg_latency_us": 59.237556346860345, 00:10:28.151 "min_latency_us": 40.49454545454545, 00:10:28.151 "max_latency_us": 1854.370909090909 00:10:28.151 } 00:10:28.151 ], 00:10:28.151 "core_count": 1 00:10:28.151 } 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63878 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63878 ']' 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63878 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63878 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:28.151 killing process with pid 63878 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63878' 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63878 00:10:28.151 [2024-11-15 10:38:58.598210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.151 10:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63878 00:10:28.409 [2024-11-15 10:38:58.710601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xQ5zq1gxlE 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:29.432 00:10:29.432 real 0m4.473s 00:10:29.432 user 0m5.714s 00:10:29.432 sys 0m0.450s 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.432 10:38:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.432 ************************************ 00:10:29.432 END TEST raid_write_error_test 00:10:29.432 ************************************ 00:10:29.432 10:38:59 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:29.432 10:38:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:29.432 10:38:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:29.432 10:38:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:29.432 10:38:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.432 10:38:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.432 ************************************ 00:10:29.432 START TEST raid_state_function_test 00:10:29.432 ************************************ 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:29.432 
10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:29.432 Process raid pid: 64020 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64020 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64020' 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64020 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 64020 ']' 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:29.432 10:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.432 [2024-11-15 10:38:59.911050] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:10:29.432 [2024-11-15 10:38:59.911389] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.690 [2024-11-15 10:39:00.093235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.690 [2024-11-15 10:39:00.220538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.949 [2024-11-15 10:39:00.429197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.949 [2024-11-15 10:39:00.429257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.515 [2024-11-15 10:39:00.904212] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.515 [2024-11-15 10:39:00.904280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.515 [2024-11-15 10:39:00.904298] 
bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.515 [2024-11-15 10:39:00.904314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.515 [2024-11-15 10:39:00.904324] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:30.515 [2024-11-15 10:39:00.904338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.515 10:39:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.515 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.515 "name": "Existed_Raid", 00:10:30.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.515 "strip_size_kb": 64, 00:10:30.515 "state": "configuring", 00:10:30.515 "raid_level": "raid0", 00:10:30.515 "superblock": false, 00:10:30.515 "num_base_bdevs": 3, 00:10:30.515 "num_base_bdevs_discovered": 0, 00:10:30.515 "num_base_bdevs_operational": 3, 00:10:30.515 "base_bdevs_list": [ 00:10:30.515 { 00:10:30.515 "name": "BaseBdev1", 00:10:30.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.515 "is_configured": false, 00:10:30.515 "data_offset": 0, 00:10:30.515 "data_size": 0 00:10:30.515 }, 00:10:30.515 { 00:10:30.515 "name": "BaseBdev2", 00:10:30.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.515 "is_configured": false, 00:10:30.515 "data_offset": 0, 00:10:30.515 "data_size": 0 00:10:30.515 }, 00:10:30.515 { 00:10:30.515 "name": "BaseBdev3", 00:10:30.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.515 "is_configured": false, 00:10:30.516 "data_offset": 0, 00:10:30.516 "data_size": 0 00:10:30.516 } 00:10:30.516 ] 00:10:30.516 }' 00:10:30.516 10:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.516 10:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 10:39:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 [2024-11-15 10:39:01.408282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.082 [2024-11-15 10:39:01.408327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 [2024-11-15 10:39:01.416263] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.082 [2024-11-15 10:39:01.416482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.082 [2024-11-15 10:39:01.416633] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.082 [2024-11-15 10:39:01.416767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.082 [2024-11-15 10:39:01.416882] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.082 [2024-11-15 10:39:01.416917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 [2024-11-15 10:39:01.456875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.082 BaseBdev1 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 [ 00:10:31.082 { 00:10:31.082 "name": "BaseBdev1", 00:10:31.082 "aliases": [ 00:10:31.082 "6d68306f-279f-45cc-a013-3f890ecd0ce3" 00:10:31.082 ], 00:10:31.082 
"product_name": "Malloc disk", 00:10:31.082 "block_size": 512, 00:10:31.082 "num_blocks": 65536, 00:10:31.082 "uuid": "6d68306f-279f-45cc-a013-3f890ecd0ce3", 00:10:31.082 "assigned_rate_limits": { 00:10:31.082 "rw_ios_per_sec": 0, 00:10:31.082 "rw_mbytes_per_sec": 0, 00:10:31.082 "r_mbytes_per_sec": 0, 00:10:31.082 "w_mbytes_per_sec": 0 00:10:31.082 }, 00:10:31.082 "claimed": true, 00:10:31.082 "claim_type": "exclusive_write", 00:10:31.082 "zoned": false, 00:10:31.082 "supported_io_types": { 00:10:31.082 "read": true, 00:10:31.082 "write": true, 00:10:31.082 "unmap": true, 00:10:31.082 "flush": true, 00:10:31.082 "reset": true, 00:10:31.083 "nvme_admin": false, 00:10:31.083 "nvme_io": false, 00:10:31.083 "nvme_io_md": false, 00:10:31.083 "write_zeroes": true, 00:10:31.083 "zcopy": true, 00:10:31.083 "get_zone_info": false, 00:10:31.083 "zone_management": false, 00:10:31.083 "zone_append": false, 00:10:31.083 "compare": false, 00:10:31.083 "compare_and_write": false, 00:10:31.083 "abort": true, 00:10:31.083 "seek_hole": false, 00:10:31.083 "seek_data": false, 00:10:31.083 "copy": true, 00:10:31.083 "nvme_iov_md": false 00:10:31.083 }, 00:10:31.083 "memory_domains": [ 00:10:31.083 { 00:10:31.083 "dma_device_id": "system", 00:10:31.083 "dma_device_type": 1 00:10:31.083 }, 00:10:31.083 { 00:10:31.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.083 "dma_device_type": 2 00:10:31.083 } 00:10:31.083 ], 00:10:31.083 "driver_specific": {} 00:10:31.083 } 00:10:31.083 ] 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.083 10:39:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.083 "name": "Existed_Raid", 00:10:31.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.083 "strip_size_kb": 64, 00:10:31.083 "state": "configuring", 00:10:31.083 "raid_level": "raid0", 00:10:31.083 "superblock": false, 00:10:31.083 "num_base_bdevs": 3, 00:10:31.083 "num_base_bdevs_discovered": 1, 00:10:31.083 "num_base_bdevs_operational": 3, 00:10:31.083 "base_bdevs_list": [ 00:10:31.083 { 00:10:31.083 "name": "BaseBdev1", 
00:10:31.083 "uuid": "6d68306f-279f-45cc-a013-3f890ecd0ce3", 00:10:31.083 "is_configured": true, 00:10:31.083 "data_offset": 0, 00:10:31.083 "data_size": 65536 00:10:31.083 }, 00:10:31.083 { 00:10:31.083 "name": "BaseBdev2", 00:10:31.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.083 "is_configured": false, 00:10:31.083 "data_offset": 0, 00:10:31.083 "data_size": 0 00:10:31.083 }, 00:10:31.083 { 00:10:31.083 "name": "BaseBdev3", 00:10:31.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.083 "is_configured": false, 00:10:31.083 "data_offset": 0, 00:10:31.083 "data_size": 0 00:10:31.083 } 00:10:31.083 ] 00:10:31.083 }' 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.083 10:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.649 [2024-11-15 10:39:02.045072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.649 [2024-11-15 10:39:02.045275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.649 [2024-11-15 
10:39:02.057130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.649 [2024-11-15 10:39:02.059499] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.649 [2024-11-15 10:39:02.059675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.649 [2024-11-15 10:39:02.059799] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.649 [2024-11-15 10:39:02.059933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.649 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.650 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.650 "name": "Existed_Raid", 00:10:31.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.650 "strip_size_kb": 64, 00:10:31.650 "state": "configuring", 00:10:31.650 "raid_level": "raid0", 00:10:31.650 "superblock": false, 00:10:31.650 "num_base_bdevs": 3, 00:10:31.650 "num_base_bdevs_discovered": 1, 00:10:31.650 "num_base_bdevs_operational": 3, 00:10:31.650 "base_bdevs_list": [ 00:10:31.650 { 00:10:31.650 "name": "BaseBdev1", 00:10:31.650 "uuid": "6d68306f-279f-45cc-a013-3f890ecd0ce3", 00:10:31.650 "is_configured": true, 00:10:31.650 "data_offset": 0, 00:10:31.650 "data_size": 65536 00:10:31.650 }, 00:10:31.650 { 00:10:31.650 "name": "BaseBdev2", 00:10:31.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.650 "is_configured": false, 00:10:31.650 "data_offset": 0, 00:10:31.650 "data_size": 0 00:10:31.650 }, 00:10:31.650 { 00:10:31.650 "name": "BaseBdev3", 00:10:31.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.650 "is_configured": false, 00:10:31.650 "data_offset": 0, 00:10:31.650 "data_size": 0 00:10:31.650 } 00:10:31.650 ] 00:10:31.650 }' 00:10:31.650 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:31.650 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.217 BaseBdev2 00:10:32.217 [2024-11-15 10:39:02.611555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.217 10:39:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.217 [ 00:10:32.217 { 00:10:32.217 "name": "BaseBdev2", 00:10:32.217 "aliases": [ 00:10:32.217 "be1f7e10-9fc3-46d2-b991-3b3ca0e2c2fc" 00:10:32.217 ], 00:10:32.217 "product_name": "Malloc disk", 00:10:32.217 "block_size": 512, 00:10:32.217 "num_blocks": 65536, 00:10:32.217 "uuid": "be1f7e10-9fc3-46d2-b991-3b3ca0e2c2fc", 00:10:32.217 "assigned_rate_limits": { 00:10:32.217 "rw_ios_per_sec": 0, 00:10:32.217 "rw_mbytes_per_sec": 0, 00:10:32.217 "r_mbytes_per_sec": 0, 00:10:32.217 "w_mbytes_per_sec": 0 00:10:32.217 }, 00:10:32.217 "claimed": true, 00:10:32.217 "claim_type": "exclusive_write", 00:10:32.217 "zoned": false, 00:10:32.217 "supported_io_types": { 00:10:32.217 "read": true, 00:10:32.217 "write": true, 00:10:32.217 "unmap": true, 00:10:32.217 "flush": true, 00:10:32.217 "reset": true, 00:10:32.217 "nvme_admin": false, 00:10:32.217 "nvme_io": false, 00:10:32.217 "nvme_io_md": false, 00:10:32.217 "write_zeroes": true, 00:10:32.217 "zcopy": true, 00:10:32.217 "get_zone_info": false, 00:10:32.217 "zone_management": false, 00:10:32.217 "zone_append": false, 00:10:32.217 "compare": false, 00:10:32.217 "compare_and_write": false, 00:10:32.217 "abort": true, 00:10:32.217 "seek_hole": false, 00:10:32.217 "seek_data": false, 00:10:32.217 "copy": true, 00:10:32.217 "nvme_iov_md": false 00:10:32.217 }, 00:10:32.217 "memory_domains": [ 00:10:32.217 { 00:10:32.217 "dma_device_id": "system", 00:10:32.217 "dma_device_type": 1 00:10:32.217 }, 00:10:32.217 { 00:10:32.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.217 "dma_device_type": 2 00:10:32.217 } 00:10:32.217 ], 00:10:32.217 "driver_specific": {} 00:10:32.217 } 00:10:32.217 ] 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.217 10:39:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.217 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.217 "name": "Existed_Raid", 00:10:32.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.217 "strip_size_kb": 64, 00:10:32.217 "state": "configuring", 00:10:32.217 "raid_level": "raid0", 00:10:32.217 "superblock": false, 00:10:32.218 "num_base_bdevs": 3, 00:10:32.218 "num_base_bdevs_discovered": 2, 00:10:32.218 "num_base_bdevs_operational": 3, 00:10:32.218 "base_bdevs_list": [ 00:10:32.218 { 00:10:32.218 "name": "BaseBdev1", 00:10:32.218 "uuid": "6d68306f-279f-45cc-a013-3f890ecd0ce3", 00:10:32.218 "is_configured": true, 00:10:32.218 "data_offset": 0, 00:10:32.218 "data_size": 65536 00:10:32.218 }, 00:10:32.218 { 00:10:32.218 "name": "BaseBdev2", 00:10:32.218 "uuid": "be1f7e10-9fc3-46d2-b991-3b3ca0e2c2fc", 00:10:32.218 "is_configured": true, 00:10:32.218 "data_offset": 0, 00:10:32.218 "data_size": 65536 00:10:32.218 }, 00:10:32.218 { 00:10:32.218 "name": "BaseBdev3", 00:10:32.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.218 "is_configured": false, 00:10:32.218 "data_offset": 0, 00:10:32.218 "data_size": 0 00:10:32.218 } 00:10:32.218 ] 00:10:32.218 }' 00:10:32.218 10:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.218 10:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.784 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:32.784 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.784 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.785 [2024-11-15 10:39:03.224138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.785 [2024-11-15 10:39:03.224211] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:32.785 [2024-11-15 10:39:03.224235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:32.785 [2024-11-15 10:39:03.224671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:32.785 BaseBdev3 00:10:32.785 [2024-11-15 10:39:03.224927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:32.785 [2024-11-15 10:39:03.224955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:32.785 [2024-11-15 10:39:03.225325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.785 
10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.785 [ 00:10:32.785 { 00:10:32.785 "name": "BaseBdev3", 00:10:32.785 "aliases": [ 00:10:32.785 "05920189-7e2d-40de-ac11-8429725b469b" 00:10:32.785 ], 00:10:32.785 "product_name": "Malloc disk", 00:10:32.785 "block_size": 512, 00:10:32.785 "num_blocks": 65536, 00:10:32.785 "uuid": "05920189-7e2d-40de-ac11-8429725b469b", 00:10:32.785 "assigned_rate_limits": { 00:10:32.785 "rw_ios_per_sec": 0, 00:10:32.785 "rw_mbytes_per_sec": 0, 00:10:32.785 "r_mbytes_per_sec": 0, 00:10:32.785 "w_mbytes_per_sec": 0 00:10:32.785 }, 00:10:32.785 "claimed": true, 00:10:32.785 "claim_type": "exclusive_write", 00:10:32.785 "zoned": false, 00:10:32.785 "supported_io_types": { 00:10:32.785 "read": true, 00:10:32.785 "write": true, 00:10:32.785 "unmap": true, 00:10:32.785 "flush": true, 00:10:32.785 "reset": true, 00:10:32.785 "nvme_admin": false, 00:10:32.785 "nvme_io": false, 00:10:32.785 "nvme_io_md": false, 00:10:32.785 "write_zeroes": true, 00:10:32.785 "zcopy": true, 00:10:32.785 "get_zone_info": false, 00:10:32.785 "zone_management": false, 00:10:32.785 "zone_append": false, 00:10:32.785 "compare": false, 00:10:32.785 "compare_and_write": false, 00:10:32.785 "abort": true, 00:10:32.785 "seek_hole": false, 00:10:32.785 "seek_data": false, 00:10:32.785 "copy": true, 00:10:32.785 "nvme_iov_md": false 00:10:32.785 }, 00:10:32.785 "memory_domains": [ 00:10:32.785 { 00:10:32.785 "dma_device_id": "system", 00:10:32.785 "dma_device_type": 1 00:10:32.785 }, 00:10:32.785 { 00:10:32.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.785 "dma_device_type": 2 00:10:32.785 } 00:10:32.785 ], 00:10:32.785 "driver_specific": {} 00:10:32.785 } 00:10:32.785 ] 
00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.785 "name": "Existed_Raid", 00:10:32.785 "uuid": "9d0ca413-e74f-448c-bf60-c2a92bf12a5e", 00:10:32.785 "strip_size_kb": 64, 00:10:32.785 "state": "online", 00:10:32.785 "raid_level": "raid0", 00:10:32.785 "superblock": false, 00:10:32.785 "num_base_bdevs": 3, 00:10:32.785 "num_base_bdevs_discovered": 3, 00:10:32.785 "num_base_bdevs_operational": 3, 00:10:32.785 "base_bdevs_list": [ 00:10:32.785 { 00:10:32.785 "name": "BaseBdev1", 00:10:32.785 "uuid": "6d68306f-279f-45cc-a013-3f890ecd0ce3", 00:10:32.785 "is_configured": true, 00:10:32.785 "data_offset": 0, 00:10:32.785 "data_size": 65536 00:10:32.785 }, 00:10:32.785 { 00:10:32.785 "name": "BaseBdev2", 00:10:32.785 "uuid": "be1f7e10-9fc3-46d2-b991-3b3ca0e2c2fc", 00:10:32.785 "is_configured": true, 00:10:32.785 "data_offset": 0, 00:10:32.785 "data_size": 65536 00:10:32.785 }, 00:10:32.785 { 00:10:32.785 "name": "BaseBdev3", 00:10:32.785 "uuid": "05920189-7e2d-40de-ac11-8429725b469b", 00:10:32.785 "is_configured": true, 00:10:32.785 "data_offset": 0, 00:10:32.785 "data_size": 65536 00:10:32.785 } 00:10:32.785 ] 00:10:32.785 }' 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.785 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.352 [2024-11-15 10:39:03.804770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.352 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.352 "name": "Existed_Raid", 00:10:33.352 "aliases": [ 00:10:33.352 "9d0ca413-e74f-448c-bf60-c2a92bf12a5e" 00:10:33.352 ], 00:10:33.352 "product_name": "Raid Volume", 00:10:33.352 "block_size": 512, 00:10:33.352 "num_blocks": 196608, 00:10:33.352 "uuid": "9d0ca413-e74f-448c-bf60-c2a92bf12a5e", 00:10:33.352 "assigned_rate_limits": { 00:10:33.352 "rw_ios_per_sec": 0, 00:10:33.352 "rw_mbytes_per_sec": 0, 00:10:33.352 "r_mbytes_per_sec": 0, 00:10:33.352 "w_mbytes_per_sec": 0 00:10:33.352 }, 00:10:33.352 "claimed": false, 00:10:33.352 "zoned": false, 00:10:33.352 "supported_io_types": { 00:10:33.352 "read": true, 00:10:33.352 "write": true, 00:10:33.352 "unmap": true, 00:10:33.352 "flush": true, 00:10:33.352 "reset": true, 00:10:33.352 "nvme_admin": false, 00:10:33.352 "nvme_io": false, 00:10:33.352 "nvme_io_md": false, 00:10:33.352 "write_zeroes": true, 00:10:33.352 "zcopy": false, 00:10:33.352 "get_zone_info": false, 00:10:33.352 "zone_management": false, 00:10:33.352 
"zone_append": false, 00:10:33.352 "compare": false, 00:10:33.352 "compare_and_write": false, 00:10:33.352 "abort": false, 00:10:33.352 "seek_hole": false, 00:10:33.352 "seek_data": false, 00:10:33.352 "copy": false, 00:10:33.352 "nvme_iov_md": false 00:10:33.352 }, 00:10:33.352 "memory_domains": [ 00:10:33.352 { 00:10:33.352 "dma_device_id": "system", 00:10:33.352 "dma_device_type": 1 00:10:33.352 }, 00:10:33.352 { 00:10:33.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.352 "dma_device_type": 2 00:10:33.352 }, 00:10:33.352 { 00:10:33.352 "dma_device_id": "system", 00:10:33.352 "dma_device_type": 1 00:10:33.352 }, 00:10:33.352 { 00:10:33.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.352 "dma_device_type": 2 00:10:33.352 }, 00:10:33.353 { 00:10:33.353 "dma_device_id": "system", 00:10:33.353 "dma_device_type": 1 00:10:33.353 }, 00:10:33.353 { 00:10:33.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.353 "dma_device_type": 2 00:10:33.353 } 00:10:33.353 ], 00:10:33.353 "driver_specific": { 00:10:33.353 "raid": { 00:10:33.353 "uuid": "9d0ca413-e74f-448c-bf60-c2a92bf12a5e", 00:10:33.353 "strip_size_kb": 64, 00:10:33.353 "state": "online", 00:10:33.353 "raid_level": "raid0", 00:10:33.353 "superblock": false, 00:10:33.353 "num_base_bdevs": 3, 00:10:33.353 "num_base_bdevs_discovered": 3, 00:10:33.353 "num_base_bdevs_operational": 3, 00:10:33.353 "base_bdevs_list": [ 00:10:33.353 { 00:10:33.353 "name": "BaseBdev1", 00:10:33.353 "uuid": "6d68306f-279f-45cc-a013-3f890ecd0ce3", 00:10:33.353 "is_configured": true, 00:10:33.353 "data_offset": 0, 00:10:33.353 "data_size": 65536 00:10:33.353 }, 00:10:33.353 { 00:10:33.353 "name": "BaseBdev2", 00:10:33.353 "uuid": "be1f7e10-9fc3-46d2-b991-3b3ca0e2c2fc", 00:10:33.353 "is_configured": true, 00:10:33.353 "data_offset": 0, 00:10:33.353 "data_size": 65536 00:10:33.353 }, 00:10:33.353 { 00:10:33.353 "name": "BaseBdev3", 00:10:33.353 "uuid": "05920189-7e2d-40de-ac11-8429725b469b", 00:10:33.353 "is_configured": true, 
00:10:33.353 "data_offset": 0, 00:10:33.353 "data_size": 65536 00:10:33.353 } 00:10:33.353 ] 00:10:33.353 } 00:10:33.353 } 00:10:33.353 }' 00:10:33.353 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.353 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:33.353 BaseBdev2 00:10:33.353 BaseBdev3' 00:10:33.353 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.612 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.612 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.612 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:33.612 10:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.612 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.612 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 10:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.612 10:39:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.612 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 [2024-11-15 10:39:04.156538] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:33.612 [2024-11-15 10:39:04.156706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.612 [2024-11-15 10:39:04.156916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.871 "name": "Existed_Raid", 00:10:33.871 "uuid": "9d0ca413-e74f-448c-bf60-c2a92bf12a5e", 00:10:33.871 "strip_size_kb": 64, 00:10:33.871 "state": "offline", 00:10:33.871 "raid_level": "raid0", 00:10:33.871 "superblock": false, 00:10:33.871 "num_base_bdevs": 3, 00:10:33.871 "num_base_bdevs_discovered": 2, 00:10:33.871 "num_base_bdevs_operational": 2, 00:10:33.871 "base_bdevs_list": [ 00:10:33.871 { 00:10:33.871 "name": null, 00:10:33.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.871 "is_configured": false, 00:10:33.871 "data_offset": 0, 00:10:33.871 "data_size": 65536 00:10:33.871 }, 00:10:33.871 { 00:10:33.871 "name": "BaseBdev2", 00:10:33.871 "uuid": "be1f7e10-9fc3-46d2-b991-3b3ca0e2c2fc", 00:10:33.871 "is_configured": true, 00:10:33.871 "data_offset": 0, 00:10:33.871 "data_size": 65536 00:10:33.871 }, 00:10:33.871 { 00:10:33.871 "name": "BaseBdev3", 00:10:33.871 "uuid": "05920189-7e2d-40de-ac11-8429725b469b", 00:10:33.871 "is_configured": true, 00:10:33.871 "data_offset": 0, 00:10:33.871 "data_size": 65536 00:10:33.871 } 00:10:33.871 ] 00:10:33.871 }' 00:10:33.871 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.871 10:39:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.437 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:34.437 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.437 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.437 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.437 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:34.437 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.437 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.437 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.438 [2024-11-15 10:39:04.777111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.438 10:39:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.438 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.438 [2024-11-15 10:39:04.913030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:34.438 [2024-11-15 10:39:04.913232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:34.696 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.696 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.696 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.696 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.696 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.696 10:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.696 10:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.696 BaseBdev2 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.696 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.696 [ 00:10:34.696 { 00:10:34.696 "name": "BaseBdev2", 00:10:34.696 "aliases": [ 00:10:34.696 "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd" 00:10:34.697 ], 00:10:34.697 "product_name": "Malloc disk", 00:10:34.697 "block_size": 512, 00:10:34.697 "num_blocks": 65536, 00:10:34.697 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:34.697 "assigned_rate_limits": { 00:10:34.697 "rw_ios_per_sec": 0, 00:10:34.697 "rw_mbytes_per_sec": 0, 00:10:34.697 "r_mbytes_per_sec": 0, 00:10:34.697 "w_mbytes_per_sec": 0 00:10:34.697 }, 00:10:34.697 "claimed": false, 00:10:34.697 "zoned": false, 00:10:34.697 "supported_io_types": { 00:10:34.697 "read": true, 00:10:34.697 "write": true, 00:10:34.697 "unmap": true, 00:10:34.697 "flush": true, 00:10:34.697 "reset": true, 00:10:34.697 "nvme_admin": false, 00:10:34.697 "nvme_io": false, 00:10:34.697 "nvme_io_md": false, 00:10:34.697 "write_zeroes": true, 00:10:34.697 "zcopy": true, 00:10:34.697 "get_zone_info": false, 00:10:34.697 "zone_management": false, 00:10:34.697 "zone_append": false, 00:10:34.697 "compare": false, 00:10:34.697 "compare_and_write": false, 00:10:34.697 "abort": true, 00:10:34.697 "seek_hole": false, 00:10:34.697 "seek_data": false, 00:10:34.697 "copy": true, 00:10:34.697 "nvme_iov_md": false 00:10:34.697 }, 00:10:34.697 "memory_domains": [ 00:10:34.697 { 00:10:34.697 "dma_device_id": "system", 00:10:34.697 "dma_device_type": 1 00:10:34.697 }, 
00:10:34.697 { 00:10:34.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.697 "dma_device_type": 2 00:10:34.697 } 00:10:34.697 ], 00:10:34.697 "driver_specific": {} 00:10:34.697 } 00:10:34.697 ] 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.697 BaseBdev3 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.697 [ 00:10:34.697 { 00:10:34.697 "name": "BaseBdev3", 00:10:34.697 "aliases": [ 00:10:34.697 "33eae3fc-dafc-4f24-83da-8d98cd338124" 00:10:34.697 ], 00:10:34.697 "product_name": "Malloc disk", 00:10:34.697 "block_size": 512, 00:10:34.697 "num_blocks": 65536, 00:10:34.697 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:34.697 "assigned_rate_limits": { 00:10:34.697 "rw_ios_per_sec": 0, 00:10:34.697 "rw_mbytes_per_sec": 0, 00:10:34.697 "r_mbytes_per_sec": 0, 00:10:34.697 "w_mbytes_per_sec": 0 00:10:34.697 }, 00:10:34.697 "claimed": false, 00:10:34.697 "zoned": false, 00:10:34.697 "supported_io_types": { 00:10:34.697 "read": true, 00:10:34.697 "write": true, 00:10:34.697 "unmap": true, 00:10:34.697 "flush": true, 00:10:34.697 "reset": true, 00:10:34.697 "nvme_admin": false, 00:10:34.697 "nvme_io": false, 00:10:34.697 "nvme_io_md": false, 00:10:34.697 "write_zeroes": true, 00:10:34.697 "zcopy": true, 00:10:34.697 "get_zone_info": false, 00:10:34.697 "zone_management": false, 00:10:34.697 "zone_append": false, 00:10:34.697 "compare": false, 00:10:34.697 "compare_and_write": false, 00:10:34.697 "abort": true, 00:10:34.697 "seek_hole": false, 00:10:34.697 "seek_data": false, 00:10:34.697 "copy": true, 00:10:34.697 "nvme_iov_md": false 00:10:34.697 }, 00:10:34.697 "memory_domains": [ 00:10:34.697 { 00:10:34.697 "dma_device_id": "system", 00:10:34.697 "dma_device_type": 1 00:10:34.697 }, 00:10:34.697 { 
00:10:34.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.697 "dma_device_type": 2 00:10:34.697 } 00:10:34.697 ], 00:10:34.697 "driver_specific": {} 00:10:34.697 } 00:10:34.697 ] 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.697 [2024-11-15 10:39:05.194666] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.697 [2024-11-15 10:39:05.194842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.697 [2024-11-15 10:39:05.194972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.697 [2024-11-15 10:39:05.197305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.697 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.697 "name": "Existed_Raid", 00:10:34.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.697 "strip_size_kb": 64, 00:10:34.697 "state": "configuring", 00:10:34.697 "raid_level": "raid0", 00:10:34.697 "superblock": false, 00:10:34.697 "num_base_bdevs": 3, 00:10:34.697 "num_base_bdevs_discovered": 2, 00:10:34.697 "num_base_bdevs_operational": 3, 00:10:34.697 "base_bdevs_list": [ 00:10:34.697 { 00:10:34.697 "name": "BaseBdev1", 00:10:34.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.697 
"is_configured": false, 00:10:34.697 "data_offset": 0, 00:10:34.697 "data_size": 0 00:10:34.697 }, 00:10:34.697 { 00:10:34.698 "name": "BaseBdev2", 00:10:34.698 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:34.698 "is_configured": true, 00:10:34.698 "data_offset": 0, 00:10:34.698 "data_size": 65536 00:10:34.698 }, 00:10:34.698 { 00:10:34.698 "name": "BaseBdev3", 00:10:34.698 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:34.698 "is_configured": true, 00:10:34.698 "data_offset": 0, 00:10:34.698 "data_size": 65536 00:10:34.698 } 00:10:34.698 ] 00:10:34.698 }' 00:10:34.698 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.698 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.268 [2024-11-15 10:39:05.670835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.268 10:39:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.268 "name": "Existed_Raid", 00:10:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.268 "strip_size_kb": 64, 00:10:35.268 "state": "configuring", 00:10:35.268 "raid_level": "raid0", 00:10:35.268 "superblock": false, 00:10:35.268 "num_base_bdevs": 3, 00:10:35.268 "num_base_bdevs_discovered": 1, 00:10:35.268 "num_base_bdevs_operational": 3, 00:10:35.268 "base_bdevs_list": [ 00:10:35.268 { 00:10:35.268 "name": "BaseBdev1", 00:10:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.268 "is_configured": false, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 0 00:10:35.268 }, 00:10:35.268 { 00:10:35.268 "name": null, 00:10:35.268 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:35.268 "is_configured": false, 00:10:35.268 "data_offset": 0, 
00:10:35.268 "data_size": 65536 00:10:35.268 }, 00:10:35.268 { 00:10:35.268 "name": "BaseBdev3", 00:10:35.268 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:35.268 "is_configured": true, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 65536 00:10:35.268 } 00:10:35.268 ] 00:10:35.268 }' 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.268 10:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.835 [2024-11-15 10:39:06.264334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.835 BaseBdev1 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.835 [ 00:10:35.835 { 00:10:35.835 "name": "BaseBdev1", 00:10:35.835 "aliases": [ 00:10:35.835 "f0aa9ee1-1b96-4295-8741-abd2945a9a17" 00:10:35.835 ], 00:10:35.835 "product_name": "Malloc disk", 00:10:35.835 "block_size": 512, 00:10:35.835 "num_blocks": 65536, 00:10:35.835 "uuid": "f0aa9ee1-1b96-4295-8741-abd2945a9a17", 00:10:35.835 "assigned_rate_limits": { 00:10:35.835 "rw_ios_per_sec": 0, 00:10:35.835 "rw_mbytes_per_sec": 0, 00:10:35.835 "r_mbytes_per_sec": 0, 00:10:35.835 "w_mbytes_per_sec": 0 00:10:35.835 }, 00:10:35.835 "claimed": true, 00:10:35.835 "claim_type": "exclusive_write", 00:10:35.835 "zoned": false, 00:10:35.835 "supported_io_types": { 00:10:35.835 "read": true, 00:10:35.835 "write": true, 00:10:35.835 "unmap": 
true, 00:10:35.835 "flush": true, 00:10:35.835 "reset": true, 00:10:35.835 "nvme_admin": false, 00:10:35.835 "nvme_io": false, 00:10:35.835 "nvme_io_md": false, 00:10:35.835 "write_zeroes": true, 00:10:35.835 "zcopy": true, 00:10:35.835 "get_zone_info": false, 00:10:35.835 "zone_management": false, 00:10:35.835 "zone_append": false, 00:10:35.835 "compare": false, 00:10:35.835 "compare_and_write": false, 00:10:35.835 "abort": true, 00:10:35.835 "seek_hole": false, 00:10:35.835 "seek_data": false, 00:10:35.835 "copy": true, 00:10:35.835 "nvme_iov_md": false 00:10:35.835 }, 00:10:35.835 "memory_domains": [ 00:10:35.835 { 00:10:35.835 "dma_device_id": "system", 00:10:35.835 "dma_device_type": 1 00:10:35.835 }, 00:10:35.835 { 00:10:35.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.835 "dma_device_type": 2 00:10:35.835 } 00:10:35.835 ], 00:10:35.835 "driver_specific": {} 00:10:35.835 } 00:10:35.835 ] 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.835 10:39:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.835 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.836 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.836 "name": "Existed_Raid", 00:10:35.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.836 "strip_size_kb": 64, 00:10:35.836 "state": "configuring", 00:10:35.836 "raid_level": "raid0", 00:10:35.836 "superblock": false, 00:10:35.836 "num_base_bdevs": 3, 00:10:35.836 "num_base_bdevs_discovered": 2, 00:10:35.836 "num_base_bdevs_operational": 3, 00:10:35.836 "base_bdevs_list": [ 00:10:35.836 { 00:10:35.836 "name": "BaseBdev1", 00:10:35.836 "uuid": "f0aa9ee1-1b96-4295-8741-abd2945a9a17", 00:10:35.836 "is_configured": true, 00:10:35.836 "data_offset": 0, 00:10:35.836 "data_size": 65536 00:10:35.836 }, 00:10:35.836 { 00:10:35.836 "name": null, 00:10:35.836 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:35.836 "is_configured": false, 00:10:35.836 "data_offset": 0, 00:10:35.836 "data_size": 65536 00:10:35.836 }, 00:10:35.836 { 00:10:35.836 "name": "BaseBdev3", 00:10:35.836 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:35.836 "is_configured": true, 00:10:35.836 "data_offset": 0, 
00:10:35.836 "data_size": 65536 00:10:35.836 } 00:10:35.836 ] 00:10:35.836 }' 00:10:35.836 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.836 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.402 [2024-11-15 10:39:06.836551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.402 "name": "Existed_Raid", 00:10:36.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.402 "strip_size_kb": 64, 00:10:36.402 "state": "configuring", 00:10:36.402 "raid_level": "raid0", 00:10:36.402 "superblock": false, 00:10:36.402 "num_base_bdevs": 3, 00:10:36.402 "num_base_bdevs_discovered": 1, 00:10:36.402 "num_base_bdevs_operational": 3, 00:10:36.402 "base_bdevs_list": [ 00:10:36.402 { 00:10:36.402 "name": "BaseBdev1", 00:10:36.402 "uuid": "f0aa9ee1-1b96-4295-8741-abd2945a9a17", 00:10:36.402 "is_configured": true, 00:10:36.402 "data_offset": 0, 00:10:36.402 "data_size": 65536 00:10:36.402 }, 00:10:36.402 { 
00:10:36.402 "name": null, 00:10:36.402 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:36.402 "is_configured": false, 00:10:36.402 "data_offset": 0, 00:10:36.402 "data_size": 65536 00:10:36.402 }, 00:10:36.402 { 00:10:36.402 "name": null, 00:10:36.402 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:36.402 "is_configured": false, 00:10:36.402 "data_offset": 0, 00:10:36.402 "data_size": 65536 00:10:36.402 } 00:10:36.402 ] 00:10:36.402 }' 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.402 10:39:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.970 [2024-11-15 10:39:07.404769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.970 "name": "Existed_Raid", 00:10:36.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.970 "strip_size_kb": 64, 00:10:36.970 "state": "configuring", 00:10:36.970 "raid_level": "raid0", 00:10:36.970 
"superblock": false, 00:10:36.970 "num_base_bdevs": 3, 00:10:36.970 "num_base_bdevs_discovered": 2, 00:10:36.970 "num_base_bdevs_operational": 3, 00:10:36.970 "base_bdevs_list": [ 00:10:36.970 { 00:10:36.970 "name": "BaseBdev1", 00:10:36.970 "uuid": "f0aa9ee1-1b96-4295-8741-abd2945a9a17", 00:10:36.970 "is_configured": true, 00:10:36.970 "data_offset": 0, 00:10:36.970 "data_size": 65536 00:10:36.970 }, 00:10:36.970 { 00:10:36.970 "name": null, 00:10:36.970 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:36.970 "is_configured": false, 00:10:36.970 "data_offset": 0, 00:10:36.970 "data_size": 65536 00:10:36.970 }, 00:10:36.970 { 00:10:36.970 "name": "BaseBdev3", 00:10:36.970 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:36.970 "is_configured": true, 00:10:36.970 "data_offset": 0, 00:10:36.970 "data_size": 65536 00:10:36.970 } 00:10:36.970 ] 00:10:36.970 }' 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.970 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.537 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.537 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.537 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.537 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.537 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.537 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:37.537 10:39:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.537 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:37.537 10:39:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.537 [2024-11-15 10:39:07.997000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.537 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.796 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.796 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.796 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.796 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.796 10:39:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.796 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.796 "name": "Existed_Raid", 00:10:37.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.796 "strip_size_kb": 64, 00:10:37.796 "state": "configuring", 00:10:37.796 "raid_level": "raid0", 00:10:37.796 "superblock": false, 00:10:37.796 "num_base_bdevs": 3, 00:10:37.796 "num_base_bdevs_discovered": 1, 00:10:37.796 "num_base_bdevs_operational": 3, 00:10:37.796 "base_bdevs_list": [ 00:10:37.796 { 00:10:37.796 "name": null, 00:10:37.796 "uuid": "f0aa9ee1-1b96-4295-8741-abd2945a9a17", 00:10:37.796 "is_configured": false, 00:10:37.796 "data_offset": 0, 00:10:37.796 "data_size": 65536 00:10:37.796 }, 00:10:37.796 { 00:10:37.796 "name": null, 00:10:37.796 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:37.796 "is_configured": false, 00:10:37.796 "data_offset": 0, 00:10:37.796 "data_size": 65536 00:10:37.796 }, 00:10:37.796 { 00:10:37.796 "name": "BaseBdev3", 00:10:37.796 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:37.796 "is_configured": true, 00:10:37.796 "data_offset": 0, 00:10:37.796 "data_size": 65536 00:10:37.796 } 00:10:37.796 ] 00:10:37.796 }' 00:10:37.796 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.796 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.055 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.055 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.055 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.055 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.313 [2024-11-15 10:39:08.684329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:38.313 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.314 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.314 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.314 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.314 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.314 "name": "Existed_Raid", 00:10:38.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.314 "strip_size_kb": 64, 00:10:38.314 "state": "configuring", 00:10:38.314 "raid_level": "raid0", 00:10:38.314 "superblock": false, 00:10:38.314 "num_base_bdevs": 3, 00:10:38.314 "num_base_bdevs_discovered": 2, 00:10:38.314 "num_base_bdevs_operational": 3, 00:10:38.314 "base_bdevs_list": [ 00:10:38.314 { 00:10:38.314 "name": null, 00:10:38.314 "uuid": "f0aa9ee1-1b96-4295-8741-abd2945a9a17", 00:10:38.314 "is_configured": false, 00:10:38.314 "data_offset": 0, 00:10:38.314 "data_size": 65536 00:10:38.314 }, 00:10:38.314 { 00:10:38.314 "name": "BaseBdev2", 00:10:38.314 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:38.314 "is_configured": true, 00:10:38.314 "data_offset": 0, 00:10:38.314 "data_size": 65536 00:10:38.314 }, 00:10:38.314 { 00:10:38.314 "name": "BaseBdev3", 00:10:38.314 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:38.314 "is_configured": true, 00:10:38.314 "data_offset": 0, 00:10:38.314 "data_size": 65536 00:10:38.314 } 00:10:38.314 ] 00:10:38.314 }' 00:10:38.314 10:39:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.314 10:39:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.881 10:39:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f0aa9ee1-1b96-4295-8741-abd2945a9a17 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.881 [2024-11-15 10:39:09.345745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:38.881 NewBaseBdev 00:10:38.881 [2024-11-15 10:39:09.345983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:38.881 [2024-11-15 10:39:09.346014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:38.881 [2024-11-15 10:39:09.346327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:10:38.881 [2024-11-15 10:39:09.346534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:38.881 [2024-11-15 10:39:09.346552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:38.881 [2024-11-15 10:39:09.346850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:38.881 [ 00:10:38.881 { 00:10:38.881 "name": "NewBaseBdev", 00:10:38.881 "aliases": [ 00:10:38.881 "f0aa9ee1-1b96-4295-8741-abd2945a9a17" 00:10:38.881 ], 00:10:38.881 "product_name": "Malloc disk", 00:10:38.881 "block_size": 512, 00:10:38.881 "num_blocks": 65536, 00:10:38.881 "uuid": "f0aa9ee1-1b96-4295-8741-abd2945a9a17", 00:10:38.881 "assigned_rate_limits": { 00:10:38.881 "rw_ios_per_sec": 0, 00:10:38.881 "rw_mbytes_per_sec": 0, 00:10:38.881 "r_mbytes_per_sec": 0, 00:10:38.881 "w_mbytes_per_sec": 0 00:10:38.881 }, 00:10:38.881 "claimed": true, 00:10:38.881 "claim_type": "exclusive_write", 00:10:38.881 "zoned": false, 00:10:38.881 "supported_io_types": { 00:10:38.881 "read": true, 00:10:38.881 "write": true, 00:10:38.881 "unmap": true, 00:10:38.881 "flush": true, 00:10:38.881 "reset": true, 00:10:38.881 "nvme_admin": false, 00:10:38.881 "nvme_io": false, 00:10:38.881 "nvme_io_md": false, 00:10:38.881 "write_zeroes": true, 00:10:38.881 "zcopy": true, 00:10:38.881 "get_zone_info": false, 00:10:38.881 "zone_management": false, 00:10:38.881 "zone_append": false, 00:10:38.881 "compare": false, 00:10:38.881 "compare_and_write": false, 00:10:38.881 "abort": true, 00:10:38.881 "seek_hole": false, 00:10:38.881 "seek_data": false, 00:10:38.881 "copy": true, 00:10:38.881 "nvme_iov_md": false 00:10:38.881 }, 00:10:38.881 "memory_domains": [ 00:10:38.881 { 00:10:38.881 "dma_device_id": "system", 00:10:38.881 "dma_device_type": 1 00:10:38.881 }, 00:10:38.881 { 00:10:38.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.881 "dma_device_type": 2 00:10:38.881 } 00:10:38.881 ], 00:10:38.881 "driver_specific": {} 00:10:38.881 } 00:10:38.881 ] 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 3 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.881 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.881 "name": "Existed_Raid", 00:10:38.881 "uuid": "95f32393-b3d7-4861-8667-0b4b1af603bd", 00:10:38.881 "strip_size_kb": 64, 00:10:38.881 "state": "online", 00:10:38.881 "raid_level": "raid0", 00:10:38.881 "superblock": false, 00:10:38.881 "num_base_bdevs": 3, 00:10:38.881 
"num_base_bdevs_discovered": 3, 00:10:38.882 "num_base_bdevs_operational": 3, 00:10:38.882 "base_bdevs_list": [ 00:10:38.882 { 00:10:38.882 "name": "NewBaseBdev", 00:10:38.882 "uuid": "f0aa9ee1-1b96-4295-8741-abd2945a9a17", 00:10:38.882 "is_configured": true, 00:10:38.882 "data_offset": 0, 00:10:38.882 "data_size": 65536 00:10:38.882 }, 00:10:38.882 { 00:10:38.882 "name": "BaseBdev2", 00:10:38.882 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:38.882 "is_configured": true, 00:10:38.882 "data_offset": 0, 00:10:38.882 "data_size": 65536 00:10:38.882 }, 00:10:38.882 { 00:10:38.882 "name": "BaseBdev3", 00:10:38.882 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:38.882 "is_configured": true, 00:10:38.882 "data_offset": 0, 00:10:38.882 "data_size": 65536 00:10:38.882 } 00:10:38.882 ] 00:10:38.882 }' 00:10:38.882 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.882 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.449 [2024-11-15 10:39:09.914310] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.449 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.449 "name": "Existed_Raid", 00:10:39.449 "aliases": [ 00:10:39.449 "95f32393-b3d7-4861-8667-0b4b1af603bd" 00:10:39.449 ], 00:10:39.449 "product_name": "Raid Volume", 00:10:39.449 "block_size": 512, 00:10:39.449 "num_blocks": 196608, 00:10:39.449 "uuid": "95f32393-b3d7-4861-8667-0b4b1af603bd", 00:10:39.449 "assigned_rate_limits": { 00:10:39.449 "rw_ios_per_sec": 0, 00:10:39.449 "rw_mbytes_per_sec": 0, 00:10:39.449 "r_mbytes_per_sec": 0, 00:10:39.449 "w_mbytes_per_sec": 0 00:10:39.449 }, 00:10:39.449 "claimed": false, 00:10:39.449 "zoned": false, 00:10:39.449 "supported_io_types": { 00:10:39.449 "read": true, 00:10:39.449 "write": true, 00:10:39.449 "unmap": true, 00:10:39.449 "flush": true, 00:10:39.449 "reset": true, 00:10:39.449 "nvme_admin": false, 00:10:39.449 "nvme_io": false, 00:10:39.449 "nvme_io_md": false, 00:10:39.449 "write_zeroes": true, 00:10:39.449 "zcopy": false, 00:10:39.449 "get_zone_info": false, 00:10:39.449 "zone_management": false, 00:10:39.449 "zone_append": false, 00:10:39.449 "compare": false, 00:10:39.449 "compare_and_write": false, 00:10:39.449 "abort": false, 00:10:39.449 "seek_hole": false, 00:10:39.449 "seek_data": false, 00:10:39.449 "copy": false, 00:10:39.449 "nvme_iov_md": false 00:10:39.449 }, 00:10:39.449 "memory_domains": [ 00:10:39.449 { 00:10:39.449 "dma_device_id": "system", 00:10:39.449 "dma_device_type": 1 00:10:39.449 }, 00:10:39.449 { 00:10:39.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.449 "dma_device_type": 2 00:10:39.449 }, 
00:10:39.449 { 00:10:39.449 "dma_device_id": "system", 00:10:39.449 "dma_device_type": 1 00:10:39.449 }, 00:10:39.449 { 00:10:39.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.449 "dma_device_type": 2 00:10:39.449 }, 00:10:39.449 { 00:10:39.449 "dma_device_id": "system", 00:10:39.449 "dma_device_type": 1 00:10:39.449 }, 00:10:39.449 { 00:10:39.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.449 "dma_device_type": 2 00:10:39.449 } 00:10:39.449 ], 00:10:39.449 "driver_specific": { 00:10:39.450 "raid": { 00:10:39.450 "uuid": "95f32393-b3d7-4861-8667-0b4b1af603bd", 00:10:39.450 "strip_size_kb": 64, 00:10:39.450 "state": "online", 00:10:39.450 "raid_level": "raid0", 00:10:39.450 "superblock": false, 00:10:39.450 "num_base_bdevs": 3, 00:10:39.450 "num_base_bdevs_discovered": 3, 00:10:39.450 "num_base_bdevs_operational": 3, 00:10:39.450 "base_bdevs_list": [ 00:10:39.450 { 00:10:39.450 "name": "NewBaseBdev", 00:10:39.450 "uuid": "f0aa9ee1-1b96-4295-8741-abd2945a9a17", 00:10:39.450 "is_configured": true, 00:10:39.450 "data_offset": 0, 00:10:39.450 "data_size": 65536 00:10:39.450 }, 00:10:39.450 { 00:10:39.450 "name": "BaseBdev2", 00:10:39.450 "uuid": "89bb3ea5-a307-4b8a-8b99-8ca80956f8dd", 00:10:39.450 "is_configured": true, 00:10:39.450 "data_offset": 0, 00:10:39.450 "data_size": 65536 00:10:39.450 }, 00:10:39.450 { 00:10:39.450 "name": "BaseBdev3", 00:10:39.450 "uuid": "33eae3fc-dafc-4f24-83da-8d98cd338124", 00:10:39.450 "is_configured": true, 00:10:39.450 "data_offset": 0, 00:10:39.450 "data_size": 65536 00:10:39.450 } 00:10:39.450 ] 00:10:39.450 } 00:10:39.450 } 00:10:39.450 }' 00:10:39.450 10:39:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.450 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:39.450 BaseBdev2 00:10:39.450 BaseBdev3' 00:10:39.708 10:39:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.709 [2024-11-15 10:39:10.214017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.709 [2024-11-15 10:39:10.214167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.709 [2024-11-15 10:39:10.214429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.709 [2024-11-15 10:39:10.214623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.709 [2024-11-15 10:39:10.214658] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64020 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 64020 ']' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 64020 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64020 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:39.709 killing process with pid 64020 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64020' 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 64020 00:10:39.709 [2024-11-15 10:39:10.253529] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.709 10:39:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 64020 00:10:39.971 [2024-11-15 10:39:10.505876] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:41.354 00:10:41.354 real 0m11.670s 00:10:41.354 user 0m19.573s 00:10:41.354 sys 0m1.440s 00:10:41.354 ************************************ 00:10:41.354 END TEST 
raid_state_function_test 00:10:41.354 ************************************ 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.354 10:39:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:41.354 10:39:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:41.354 10:39:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:41.354 10:39:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.354 ************************************ 00:10:41.354 START TEST raid_state_function_test_sb 00:10:41.354 ************************************ 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.354 10:39:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=64654 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64654' 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:41.354 Process raid pid: 64654 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64654 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64654 ']' 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:41.354 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.354 [2024-11-15 10:39:11.646072] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:10:41.354 [2024-11-15 10:39:11.646238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.354 [2024-11-15 10:39:11.833035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.613 [2024-11-15 10:39:11.958713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.613 [2024-11-15 10:39:12.162420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.613 [2024-11-15 10:39:12.162471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.181 [2024-11-15 10:39:12.606310] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.181 [2024-11-15 10:39:12.606390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.181 [2024-11-15 10:39:12.606408] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.181 [2024-11-15 10:39:12.606425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.181 [2024-11-15 10:39:12.606436] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:42.181 [2024-11-15 10:39:12.606450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.181 "name": "Existed_Raid", 00:10:42.181 "uuid": "6294c346-947a-4018-b62d-847120253525", 00:10:42.181 "strip_size_kb": 64, 00:10:42.181 "state": "configuring", 00:10:42.181 "raid_level": "raid0", 00:10:42.181 "superblock": true, 00:10:42.181 "num_base_bdevs": 3, 00:10:42.181 "num_base_bdevs_discovered": 0, 00:10:42.181 "num_base_bdevs_operational": 3, 00:10:42.181 "base_bdevs_list": [ 00:10:42.181 { 00:10:42.181 "name": "BaseBdev1", 00:10:42.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.181 "is_configured": false, 00:10:42.181 "data_offset": 0, 00:10:42.181 "data_size": 0 00:10:42.181 }, 00:10:42.181 { 00:10:42.181 "name": "BaseBdev2", 00:10:42.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.181 "is_configured": false, 00:10:42.181 "data_offset": 0, 00:10:42.181 "data_size": 0 00:10:42.181 }, 00:10:42.181 { 00:10:42.181 "name": "BaseBdev3", 00:10:42.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.181 "is_configured": false, 00:10:42.181 "data_offset": 0, 00:10:42.181 "data_size": 0 00:10:42.181 } 00:10:42.181 ] 00:10:42.181 }' 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.181 10:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.749 [2024-11-15 10:39:13.142420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.749 [2024-11-15 10:39:13.142606] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.749 [2024-11-15 10:39:13.154424] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.749 [2024-11-15 10:39:13.154610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.749 [2024-11-15 10:39:13.154735] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.749 [2024-11-15 10:39:13.154808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.749 [2024-11-15 10:39:13.154918] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.749 [2024-11-15 10:39:13.154975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.749 [2024-11-15 10:39:13.199626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.749 BaseBdev1 
00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.749 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.749 [ 00:10:42.749 { 00:10:42.749 "name": "BaseBdev1", 00:10:42.749 "aliases": [ 00:10:42.749 "3c29cdce-992a-4742-a7d3-46b577317c41" 00:10:42.749 ], 00:10:42.749 "product_name": "Malloc disk", 00:10:42.749 "block_size": 512, 00:10:42.749 "num_blocks": 65536, 00:10:42.749 "uuid": "3c29cdce-992a-4742-a7d3-46b577317c41", 00:10:42.749 "assigned_rate_limits": { 00:10:42.749 
"rw_ios_per_sec": 0, 00:10:42.749 "rw_mbytes_per_sec": 0, 00:10:42.749 "r_mbytes_per_sec": 0, 00:10:42.749 "w_mbytes_per_sec": 0 00:10:42.749 }, 00:10:42.749 "claimed": true, 00:10:42.749 "claim_type": "exclusive_write", 00:10:42.749 "zoned": false, 00:10:42.749 "supported_io_types": { 00:10:42.749 "read": true, 00:10:42.749 "write": true, 00:10:42.749 "unmap": true, 00:10:42.749 "flush": true, 00:10:42.749 "reset": true, 00:10:42.749 "nvme_admin": false, 00:10:42.749 "nvme_io": false, 00:10:42.749 "nvme_io_md": false, 00:10:42.749 "write_zeroes": true, 00:10:42.749 "zcopy": true, 00:10:42.749 "get_zone_info": false, 00:10:42.749 "zone_management": false, 00:10:42.749 "zone_append": false, 00:10:42.749 "compare": false, 00:10:42.749 "compare_and_write": false, 00:10:42.750 "abort": true, 00:10:42.750 "seek_hole": false, 00:10:42.750 "seek_data": false, 00:10:42.750 "copy": true, 00:10:42.750 "nvme_iov_md": false 00:10:42.750 }, 00:10:42.750 "memory_domains": [ 00:10:42.750 { 00:10:42.750 "dma_device_id": "system", 00:10:42.750 "dma_device_type": 1 00:10:42.750 }, 00:10:42.750 { 00:10:42.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.750 "dma_device_type": 2 00:10:42.750 } 00:10:42.750 ], 00:10:42.750 "driver_specific": {} 00:10:42.750 } 00:10:42.750 ] 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.750 "name": "Existed_Raid", 00:10:42.750 "uuid": "06ddfd82-8c90-414b-8a80-20542258ab80", 00:10:42.750 "strip_size_kb": 64, 00:10:42.750 "state": "configuring", 00:10:42.750 "raid_level": "raid0", 00:10:42.750 "superblock": true, 00:10:42.750 "num_base_bdevs": 3, 00:10:42.750 "num_base_bdevs_discovered": 1, 00:10:42.750 "num_base_bdevs_operational": 3, 00:10:42.750 "base_bdevs_list": [ 00:10:42.750 { 00:10:42.750 "name": "BaseBdev1", 00:10:42.750 "uuid": "3c29cdce-992a-4742-a7d3-46b577317c41", 00:10:42.750 "is_configured": true, 00:10:42.750 "data_offset": 2048, 00:10:42.750 "data_size": 63488 
00:10:42.750 }, 00:10:42.750 { 00:10:42.750 "name": "BaseBdev2", 00:10:42.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.750 "is_configured": false, 00:10:42.750 "data_offset": 0, 00:10:42.750 "data_size": 0 00:10:42.750 }, 00:10:42.750 { 00:10:42.750 "name": "BaseBdev3", 00:10:42.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.750 "is_configured": false, 00:10:42.750 "data_offset": 0, 00:10:42.750 "data_size": 0 00:10:42.750 } 00:10:42.750 ] 00:10:42.750 }' 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.750 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.317 [2024-11-15 10:39:13.776098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.317 [2024-11-15 10:39:13.776299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.317 [2024-11-15 10:39:13.784156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.317 [2024-11-15 
10:39:13.786541] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.317 [2024-11-15 10:39:13.786713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.317 [2024-11-15 10:39:13.786832] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.317 [2024-11-15 10:39:13.786892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.317 "name": "Existed_Raid", 00:10:43.317 "uuid": "9cd6b3d7-e386-4122-9ad8-27815094328d", 00:10:43.317 "strip_size_kb": 64, 00:10:43.317 "state": "configuring", 00:10:43.317 "raid_level": "raid0", 00:10:43.317 "superblock": true, 00:10:43.317 "num_base_bdevs": 3, 00:10:43.317 "num_base_bdevs_discovered": 1, 00:10:43.317 "num_base_bdevs_operational": 3, 00:10:43.317 "base_bdevs_list": [ 00:10:43.317 { 00:10:43.317 "name": "BaseBdev1", 00:10:43.317 "uuid": "3c29cdce-992a-4742-a7d3-46b577317c41", 00:10:43.317 "is_configured": true, 00:10:43.317 "data_offset": 2048, 00:10:43.317 "data_size": 63488 00:10:43.317 }, 00:10:43.317 { 00:10:43.317 "name": "BaseBdev2", 00:10:43.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.317 "is_configured": false, 00:10:43.317 "data_offset": 0, 00:10:43.317 "data_size": 0 00:10:43.317 }, 00:10:43.317 { 00:10:43.317 "name": "BaseBdev3", 00:10:43.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.317 "is_configured": false, 00:10:43.317 "data_offset": 0, 00:10:43.317 "data_size": 0 00:10:43.317 } 00:10:43.317 ] 00:10:43.317 }' 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.317 10:39:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.884 [2024-11-15 10:39:14.318070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.884 BaseBdev2 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.884 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.884 [ 00:10:43.884 { 00:10:43.884 "name": "BaseBdev2", 00:10:43.884 "aliases": [ 00:10:43.884 "85b77862-6631-463d-8ed4-26ff8028992b" 00:10:43.884 ], 00:10:43.884 "product_name": "Malloc disk", 00:10:43.884 "block_size": 512, 00:10:43.884 "num_blocks": 65536, 00:10:43.884 "uuid": "85b77862-6631-463d-8ed4-26ff8028992b", 00:10:43.884 "assigned_rate_limits": { 00:10:43.884 "rw_ios_per_sec": 0, 00:10:43.884 "rw_mbytes_per_sec": 0, 00:10:43.884 "r_mbytes_per_sec": 0, 00:10:43.884 "w_mbytes_per_sec": 0 00:10:43.884 }, 00:10:43.884 "claimed": true, 00:10:43.884 "claim_type": "exclusive_write", 00:10:43.884 "zoned": false, 00:10:43.884 "supported_io_types": { 00:10:43.884 "read": true, 00:10:43.884 "write": true, 00:10:43.884 "unmap": true, 00:10:43.884 "flush": true, 00:10:43.884 "reset": true, 00:10:43.884 "nvme_admin": false, 00:10:43.884 "nvme_io": false, 00:10:43.884 "nvme_io_md": false, 00:10:43.884 "write_zeroes": true, 00:10:43.884 "zcopy": true, 00:10:43.884 "get_zone_info": false, 00:10:43.884 "zone_management": false, 00:10:43.884 "zone_append": false, 00:10:43.884 "compare": false, 00:10:43.884 "compare_and_write": false, 00:10:43.884 "abort": true, 00:10:43.884 "seek_hole": false, 00:10:43.884 "seek_data": false, 00:10:43.884 "copy": true, 00:10:43.884 "nvme_iov_md": false 00:10:43.884 }, 00:10:43.884 "memory_domains": [ 00:10:43.884 { 00:10:43.884 "dma_device_id": "system", 00:10:43.885 "dma_device_type": 1 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.885 "dma_device_type": 2 00:10:43.885 } 00:10:43.885 ], 00:10:43.885 "driver_specific": {} 00:10:43.885 } 00:10:43.885 ] 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.885 "name": "Existed_Raid", 00:10:43.885 "uuid": "9cd6b3d7-e386-4122-9ad8-27815094328d", 00:10:43.885 "strip_size_kb": 64, 00:10:43.885 "state": "configuring", 00:10:43.885 "raid_level": "raid0", 00:10:43.885 "superblock": true, 00:10:43.885 "num_base_bdevs": 3, 00:10:43.885 "num_base_bdevs_discovered": 2, 00:10:43.885 "num_base_bdevs_operational": 3, 00:10:43.885 "base_bdevs_list": [ 00:10:43.885 { 00:10:43.885 "name": "BaseBdev1", 00:10:43.885 "uuid": "3c29cdce-992a-4742-a7d3-46b577317c41", 00:10:43.885 "is_configured": true, 00:10:43.885 "data_offset": 2048, 00:10:43.885 "data_size": 63488 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "name": "BaseBdev2", 00:10:43.885 "uuid": "85b77862-6631-463d-8ed4-26ff8028992b", 00:10:43.885 "is_configured": true, 00:10:43.885 "data_offset": 2048, 00:10:43.885 "data_size": 63488 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "name": "BaseBdev3", 00:10:43.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.885 "is_configured": false, 00:10:43.885 "data_offset": 0, 00:10:43.885 "data_size": 0 00:10:43.885 } 00:10:43.885 ] 00:10:43.885 }' 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.885 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.452 [2024-11-15 10:39:14.857522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.452 [2024-11-15 10:39:14.857808] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:44.452 [2024-11-15 10:39:14.857839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:44.452 BaseBdev3 00:10:44.452 [2024-11-15 10:39:14.858157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:44.452 [2024-11-15 10:39:14.858379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:44.452 [2024-11-15 10:39:14.858404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:44.452 [2024-11-15 10:39:14.858593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.452 [ 00:10:44.452 { 00:10:44.452 "name": "BaseBdev3", 00:10:44.452 "aliases": [ 00:10:44.452 "ab34d23c-6208-457e-8c6c-9b5aa4bc2f6f" 00:10:44.452 ], 00:10:44.452 "product_name": "Malloc disk", 00:10:44.452 "block_size": 512, 00:10:44.452 "num_blocks": 65536, 00:10:44.452 "uuid": "ab34d23c-6208-457e-8c6c-9b5aa4bc2f6f", 00:10:44.452 "assigned_rate_limits": { 00:10:44.452 "rw_ios_per_sec": 0, 00:10:44.452 "rw_mbytes_per_sec": 0, 00:10:44.452 "r_mbytes_per_sec": 0, 00:10:44.452 "w_mbytes_per_sec": 0 00:10:44.452 }, 00:10:44.452 "claimed": true, 00:10:44.452 "claim_type": "exclusive_write", 00:10:44.452 "zoned": false, 00:10:44.452 "supported_io_types": { 00:10:44.452 "read": true, 00:10:44.452 "write": true, 00:10:44.452 "unmap": true, 00:10:44.452 "flush": true, 00:10:44.452 "reset": true, 00:10:44.452 "nvme_admin": false, 00:10:44.452 "nvme_io": false, 00:10:44.452 "nvme_io_md": false, 00:10:44.452 "write_zeroes": true, 00:10:44.452 "zcopy": true, 00:10:44.452 "get_zone_info": false, 00:10:44.452 "zone_management": false, 00:10:44.452 "zone_append": false, 00:10:44.452 "compare": false, 00:10:44.452 "compare_and_write": false, 00:10:44.452 "abort": true, 00:10:44.452 "seek_hole": false, 00:10:44.452 "seek_data": false, 00:10:44.452 "copy": true, 00:10:44.452 "nvme_iov_md": false 00:10:44.452 }, 00:10:44.452 "memory_domains": [ 00:10:44.452 { 00:10:44.452 "dma_device_id": "system", 00:10:44.452 "dma_device_type": 1 00:10:44.452 }, 00:10:44.452 { 00:10:44.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.452 "dma_device_type": 2 00:10:44.452 } 00:10:44.452 ], 00:10:44.452 "driver_specific": 
{} 00:10:44.452 } 00:10:44.452 ] 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.452 "name": "Existed_Raid", 00:10:44.452 "uuid": "9cd6b3d7-e386-4122-9ad8-27815094328d", 00:10:44.452 "strip_size_kb": 64, 00:10:44.452 "state": "online", 00:10:44.452 "raid_level": "raid0", 00:10:44.452 "superblock": true, 00:10:44.452 "num_base_bdevs": 3, 00:10:44.452 "num_base_bdevs_discovered": 3, 00:10:44.452 "num_base_bdevs_operational": 3, 00:10:44.452 "base_bdevs_list": [ 00:10:44.452 { 00:10:44.452 "name": "BaseBdev1", 00:10:44.452 "uuid": "3c29cdce-992a-4742-a7d3-46b577317c41", 00:10:44.452 "is_configured": true, 00:10:44.452 "data_offset": 2048, 00:10:44.452 "data_size": 63488 00:10:44.452 }, 00:10:44.452 { 00:10:44.452 "name": "BaseBdev2", 00:10:44.452 "uuid": "85b77862-6631-463d-8ed4-26ff8028992b", 00:10:44.452 "is_configured": true, 00:10:44.452 "data_offset": 2048, 00:10:44.452 "data_size": 63488 00:10:44.452 }, 00:10:44.452 { 00:10:44.452 "name": "BaseBdev3", 00:10:44.452 "uuid": "ab34d23c-6208-457e-8c6c-9b5aa4bc2f6f", 00:10:44.452 "is_configured": true, 00:10:44.452 "data_offset": 2048, 00:10:44.452 "data_size": 63488 00:10:44.452 } 00:10:44.452 ] 00:10:44.452 }' 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.452 10:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.019 [2024-11-15 10:39:15.437971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.019 "name": "Existed_Raid", 00:10:45.019 "aliases": [ 00:10:45.019 "9cd6b3d7-e386-4122-9ad8-27815094328d" 00:10:45.019 ], 00:10:45.019 "product_name": "Raid Volume", 00:10:45.019 "block_size": 512, 00:10:45.019 "num_blocks": 190464, 00:10:45.019 "uuid": "9cd6b3d7-e386-4122-9ad8-27815094328d", 00:10:45.019 "assigned_rate_limits": { 00:10:45.019 "rw_ios_per_sec": 0, 00:10:45.019 "rw_mbytes_per_sec": 0, 00:10:45.019 "r_mbytes_per_sec": 0, 00:10:45.019 "w_mbytes_per_sec": 0 00:10:45.019 }, 00:10:45.019 "claimed": false, 00:10:45.019 "zoned": false, 00:10:45.019 "supported_io_types": { 00:10:45.019 "read": true, 00:10:45.019 "write": true, 00:10:45.019 "unmap": true, 00:10:45.019 "flush": true, 00:10:45.019 "reset": true, 00:10:45.019 "nvme_admin": false, 00:10:45.019 "nvme_io": false, 00:10:45.019 "nvme_io_md": false, 00:10:45.019 
"write_zeroes": true, 00:10:45.019 "zcopy": false, 00:10:45.019 "get_zone_info": false, 00:10:45.019 "zone_management": false, 00:10:45.019 "zone_append": false, 00:10:45.019 "compare": false, 00:10:45.019 "compare_and_write": false, 00:10:45.019 "abort": false, 00:10:45.019 "seek_hole": false, 00:10:45.019 "seek_data": false, 00:10:45.019 "copy": false, 00:10:45.019 "nvme_iov_md": false 00:10:45.019 }, 00:10:45.019 "memory_domains": [ 00:10:45.019 { 00:10:45.019 "dma_device_id": "system", 00:10:45.019 "dma_device_type": 1 00:10:45.019 }, 00:10:45.019 { 00:10:45.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.019 "dma_device_type": 2 00:10:45.019 }, 00:10:45.019 { 00:10:45.019 "dma_device_id": "system", 00:10:45.019 "dma_device_type": 1 00:10:45.019 }, 00:10:45.019 { 00:10:45.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.019 "dma_device_type": 2 00:10:45.019 }, 00:10:45.019 { 00:10:45.019 "dma_device_id": "system", 00:10:45.019 "dma_device_type": 1 00:10:45.019 }, 00:10:45.019 { 00:10:45.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.019 "dma_device_type": 2 00:10:45.019 } 00:10:45.019 ], 00:10:45.019 "driver_specific": { 00:10:45.019 "raid": { 00:10:45.019 "uuid": "9cd6b3d7-e386-4122-9ad8-27815094328d", 00:10:45.019 "strip_size_kb": 64, 00:10:45.019 "state": "online", 00:10:45.019 "raid_level": "raid0", 00:10:45.019 "superblock": true, 00:10:45.019 "num_base_bdevs": 3, 00:10:45.019 "num_base_bdevs_discovered": 3, 00:10:45.019 "num_base_bdevs_operational": 3, 00:10:45.019 "base_bdevs_list": [ 00:10:45.019 { 00:10:45.019 "name": "BaseBdev1", 00:10:45.019 "uuid": "3c29cdce-992a-4742-a7d3-46b577317c41", 00:10:45.019 "is_configured": true, 00:10:45.019 "data_offset": 2048, 00:10:45.019 "data_size": 63488 00:10:45.019 }, 00:10:45.019 { 00:10:45.019 "name": "BaseBdev2", 00:10:45.019 "uuid": "85b77862-6631-463d-8ed4-26ff8028992b", 00:10:45.019 "is_configured": true, 00:10:45.019 "data_offset": 2048, 00:10:45.019 "data_size": 63488 00:10:45.019 }, 
00:10:45.019 { 00:10:45.019 "name": "BaseBdev3", 00:10:45.019 "uuid": "ab34d23c-6208-457e-8c6c-9b5aa4bc2f6f", 00:10:45.019 "is_configured": true, 00:10:45.019 "data_offset": 2048, 00:10:45.019 "data_size": 63488 00:10:45.019 } 00:10:45.019 ] 00:10:45.019 } 00:10:45.019 } 00:10:45.019 }' 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:45.019 BaseBdev2 00:10:45.019 BaseBdev3' 00:10:45.019 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.020 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.020 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.278 
10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- 
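The loop traced above (bdev_raid.sh@188-193) pulls the configured base bdev names out of the captured `raid_bdev_info` JSON, then checks that each base bdev's `block_size`/`md_size`/`md_interleave`/`dif_type` tuple matches the raid volume's. A minimal Python sketch of those two jq filters, using values copied from this trace (illustrative only; the test itself drives `jq` against `rpc_cmd` output):

```python
import json  # the real test consumes rpc_cmd JSON; shown here as a literal dict

# Trimmed-down bdev descriptor shaped like the bdev_get_bdevs output above.
raid_bdev = {
    "name": "Existed_Raid",
    "block_size": 512,
    "driver_specific": {"raid": {"base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
    ]}},
}

def configured_base_names(info):
    # Mirrors: jq -r '.driver_specific.raid.base_bdevs_list[]
    #                 | select(.is_configured == true).name'
    return [b["name"]
            for b in info["driver_specific"]["raid"]["base_bdevs_list"]
            if b["is_configured"]]

def format_key(info):
    # Mirrors: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # jq's join() renders null as "", which is why cmp_raid_bdev ends up as
    # "512" plus three trailing spaces -- compare the [[ 512 == \5\1\2\ \ \ ]]
    # test in the trace.
    fields = [info.get("block_size"), info.get("md_size"),
              info.get("md_interleave"), info.get("dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

print(configured_base_names(raid_bdev))  # ['BaseBdev1', 'BaseBdev2', 'BaseBdev3']
print(repr(format_key(raid_bdev)))       # '512   '
```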
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.278 [2024-11-15 10:39:15.745704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.278 [2024-11-15 10:39:15.745739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.278 [2024-11-15 10:39:15.745806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.278 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.538 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.538 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.538 "name": "Existed_Raid", 00:10:45.538 "uuid": "9cd6b3d7-e386-4122-9ad8-27815094328d", 00:10:45.538 "strip_size_kb": 64, 00:10:45.538 "state": "offline", 00:10:45.538 "raid_level": "raid0", 00:10:45.538 "superblock": true, 00:10:45.538 "num_base_bdevs": 3, 00:10:45.538 "num_base_bdevs_discovered": 2, 00:10:45.538 "num_base_bdevs_operational": 2, 00:10:45.538 "base_bdevs_list": [ 00:10:45.538 { 00:10:45.538 "name": null, 00:10:45.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.538 "is_configured": false, 00:10:45.538 "data_offset": 0, 00:10:45.538 "data_size": 63488 00:10:45.538 }, 00:10:45.538 { 00:10:45.538 "name": "BaseBdev2", 00:10:45.538 "uuid": "85b77862-6631-463d-8ed4-26ff8028992b", 00:10:45.538 "is_configured": true, 00:10:45.538 "data_offset": 2048, 00:10:45.538 "data_size": 63488 00:10:45.538 }, 00:10:45.538 { 00:10:45.538 "name": "BaseBdev3", 00:10:45.538 "uuid": "ab34d23c-6208-457e-8c6c-9b5aa4bc2f6f", 
00:10:45.538 "is_configured": true, 00:10:45.538 "data_offset": 2048, 00:10:45.538 "data_size": 63488 00:10:45.538 } 00:10:45.538 ] 00:10:45.538 }' 00:10:45.538 10:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.538 10:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.105 [2024-11-15 10:39:16.433185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.105 [2024-11-15 10:39:16.576468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.105 [2024-11-15 10:39:16.576668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
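In the removal sequence above, `has_redundancy raid0` (bdev_raid.sh@198-200) returns 1, so the test sets `expected_state=offline` before deleting BaseBdev1. A small sketch of that decision; note the set of levels treated as redundant is an assumption on my part, since only raid0's behaviour is visible in this log:

```python
def expected_state_after_removal(raid_level: str) -> str:
    # has_redundancy returns 0 (redundant) or 1; the trace only shows the
    # raid0 branch taking "return 1".  The membership set below is a guess
    # at the other case branches, not confirmed by this log.
    redundant_levels = {"raid1", "raid5f"}  # assumption
    return "online" if raid_level in redundant_levels else "offline"

print(expected_state_after_removal("raid0"))  # offline
```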
bdev_raid_get_bdevs all 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.105 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.362 BaseBdev2 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:46.362 10:39:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.362 [ 00:10:46.362 { 00:10:46.362 "name": "BaseBdev2", 00:10:46.362 "aliases": [ 00:10:46.362 "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675" 00:10:46.362 ], 00:10:46.362 "product_name": "Malloc disk", 00:10:46.362 "block_size": 512, 00:10:46.362 "num_blocks": 65536, 00:10:46.362 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:46.362 "assigned_rate_limits": { 00:10:46.362 "rw_ios_per_sec": 0, 00:10:46.362 "rw_mbytes_per_sec": 0, 00:10:46.362 "r_mbytes_per_sec": 0, 00:10:46.362 "w_mbytes_per_sec": 0 00:10:46.362 }, 00:10:46.362 "claimed": false, 00:10:46.362 "zoned": false, 00:10:46.362 "supported_io_types": { 00:10:46.362 "read": true, 00:10:46.362 "write": true, 00:10:46.362 "unmap": true, 00:10:46.362 "flush": true, 00:10:46.362 "reset": true, 00:10:46.362 "nvme_admin": false, 00:10:46.362 "nvme_io": false, 00:10:46.362 "nvme_io_md": false, 00:10:46.362 "write_zeroes": true, 00:10:46.362 "zcopy": true, 00:10:46.362 "get_zone_info": false, 00:10:46.362 
"zone_management": false, 00:10:46.362 "zone_append": false, 00:10:46.362 "compare": false, 00:10:46.362 "compare_and_write": false, 00:10:46.362 "abort": true, 00:10:46.362 "seek_hole": false, 00:10:46.362 "seek_data": false, 00:10:46.362 "copy": true, 00:10:46.362 "nvme_iov_md": false 00:10:46.362 }, 00:10:46.362 "memory_domains": [ 00:10:46.362 { 00:10:46.362 "dma_device_id": "system", 00:10:46.362 "dma_device_type": 1 00:10:46.362 }, 00:10:46.362 { 00:10:46.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.362 "dma_device_type": 2 00:10:46.362 } 00:10:46.362 ], 00:10:46.362 "driver_specific": {} 00:10:46.362 } 00:10:46.362 ] 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.362 BaseBdev3 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.362 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 [ 00:10:46.363 { 00:10:46.363 "name": "BaseBdev3", 00:10:46.363 "aliases": [ 00:10:46.363 "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11" 00:10:46.363 ], 00:10:46.363 "product_name": "Malloc disk", 00:10:46.363 "block_size": 512, 00:10:46.363 "num_blocks": 65536, 00:10:46.363 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:46.363 "assigned_rate_limits": { 00:10:46.363 "rw_ios_per_sec": 0, 00:10:46.363 "rw_mbytes_per_sec": 0, 00:10:46.363 "r_mbytes_per_sec": 0, 00:10:46.363 "w_mbytes_per_sec": 0 00:10:46.363 }, 00:10:46.363 "claimed": false, 00:10:46.363 "zoned": false, 00:10:46.363 "supported_io_types": { 00:10:46.363 "read": true, 00:10:46.363 "write": true, 00:10:46.363 "unmap": true, 00:10:46.363 "flush": true, 00:10:46.363 "reset": true, 00:10:46.363 "nvme_admin": false, 00:10:46.363 "nvme_io": false, 00:10:46.363 "nvme_io_md": false, 00:10:46.363 "write_zeroes": true, 00:10:46.363 
"zcopy": true, 00:10:46.363 "get_zone_info": false, 00:10:46.363 "zone_management": false, 00:10:46.363 "zone_append": false, 00:10:46.363 "compare": false, 00:10:46.363 "compare_and_write": false, 00:10:46.363 "abort": true, 00:10:46.363 "seek_hole": false, 00:10:46.363 "seek_data": false, 00:10:46.363 "copy": true, 00:10:46.363 "nvme_iov_md": false 00:10:46.363 }, 00:10:46.363 "memory_domains": [ 00:10:46.363 { 00:10:46.363 "dma_device_id": "system", 00:10:46.363 "dma_device_type": 1 00:10:46.363 }, 00:10:46.363 { 00:10:46.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.363 "dma_device_type": 2 00:10:46.363 } 00:10:46.363 ], 00:10:46.363 "driver_specific": {} 00:10:46.363 } 00:10:46.363 ] 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 [2024-11-15 10:39:16.858485] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.363 [2024-11-15 10:39:16.858673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.363 [2024-11-15 10:39:16.858812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.363 [2024-11-15 10:39:16.861150] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.363 10:39:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.363 "name": "Existed_Raid", 00:10:46.363 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:46.363 "strip_size_kb": 64, 00:10:46.363 "state": "configuring", 00:10:46.363 "raid_level": "raid0", 00:10:46.363 "superblock": true, 00:10:46.363 "num_base_bdevs": 3, 00:10:46.363 "num_base_bdevs_discovered": 2, 00:10:46.363 "num_base_bdevs_operational": 3, 00:10:46.363 "base_bdevs_list": [ 00:10:46.363 { 00:10:46.363 "name": "BaseBdev1", 00:10:46.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.363 "is_configured": false, 00:10:46.363 "data_offset": 0, 00:10:46.363 "data_size": 0 00:10:46.363 }, 00:10:46.363 { 00:10:46.363 "name": "BaseBdev2", 00:10:46.363 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:46.363 "is_configured": true, 00:10:46.363 "data_offset": 2048, 00:10:46.363 "data_size": 63488 00:10:46.363 }, 00:10:46.363 { 00:10:46.363 "name": "BaseBdev3", 00:10:46.363 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:46.363 "is_configured": true, 00:10:46.363 "data_offset": 2048, 00:10:46.363 "data_size": 63488 00:10:46.363 } 00:10:46.363 ] 00:10:46.363 }' 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.363 10:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.929 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:46.929 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.929 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.929 [2024-11-15 10:39:17.378618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.929 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.929 10:39:17 
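The `verify_raid_bdev_state Existed_Raid configuring raid0 64 3` calls in this trace (bdev_raid.sh@103 onward) capture `rpc_cmd bdev_raid_get_bdevs all`, filter it with `jq 'select(.name == "Existed_Raid")'`, and compare fields against the expected values. A rough Python paraphrase of that check, with field values copied from the JSON captured above (the function name is reused for readability; this is not the real shell implementation):

```python
# Entry as dumped by bdev_raid_get_bdevs in the trace above, abbreviated.
raid_info = {
    "name": "Existed_Raid",
    "uuid": "96c391a7-d009-471f-92af-544d3732c897",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "superblock": True,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3,
}

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    # Compare only the fields the trace visibly checks.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

print(verify_raid_bdev_state(raid_info, "configuring", "raid0", 64, 3))  # True
```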
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:46.929 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.929 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.929 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.929 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.930 "name": "Existed_Raid", 00:10:46.930 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:46.930 "strip_size_kb": 64, 
00:10:46.930 "state": "configuring", 00:10:46.930 "raid_level": "raid0", 00:10:46.930 "superblock": true, 00:10:46.930 "num_base_bdevs": 3, 00:10:46.930 "num_base_bdevs_discovered": 1, 00:10:46.930 "num_base_bdevs_operational": 3, 00:10:46.930 "base_bdevs_list": [ 00:10:46.930 { 00:10:46.930 "name": "BaseBdev1", 00:10:46.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.930 "is_configured": false, 00:10:46.930 "data_offset": 0, 00:10:46.930 "data_size": 0 00:10:46.930 }, 00:10:46.930 { 00:10:46.930 "name": null, 00:10:46.930 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:46.930 "is_configured": false, 00:10:46.930 "data_offset": 0, 00:10:46.930 "data_size": 63488 00:10:46.930 }, 00:10:46.930 { 00:10:46.930 "name": "BaseBdev3", 00:10:46.930 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:46.930 "is_configured": true, 00:10:46.930 "data_offset": 2048, 00:10:46.930 "data_size": 63488 00:10:46.930 } 00:10:46.930 ] 00:10:46.930 }' 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.930 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.497 [2024-11-15 10:39:17.972210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.497 BaseBdev1 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.497 10:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.497 
[ 00:10:47.497 { 00:10:47.497 "name": "BaseBdev1", 00:10:47.497 "aliases": [ 00:10:47.497 "0c93b213-fec5-46e0-a14f-f27b12796c84" 00:10:47.497 ], 00:10:47.497 "product_name": "Malloc disk", 00:10:47.497 "block_size": 512, 00:10:47.497 "num_blocks": 65536, 00:10:47.497 "uuid": "0c93b213-fec5-46e0-a14f-f27b12796c84", 00:10:47.497 "assigned_rate_limits": { 00:10:47.497 "rw_ios_per_sec": 0, 00:10:47.497 "rw_mbytes_per_sec": 0, 00:10:47.497 "r_mbytes_per_sec": 0, 00:10:47.497 "w_mbytes_per_sec": 0 00:10:47.497 }, 00:10:47.497 "claimed": true, 00:10:47.497 "claim_type": "exclusive_write", 00:10:47.497 "zoned": false, 00:10:47.497 "supported_io_types": { 00:10:47.497 "read": true, 00:10:47.497 "write": true, 00:10:47.497 "unmap": true, 00:10:47.497 "flush": true, 00:10:47.497 "reset": true, 00:10:47.497 "nvme_admin": false, 00:10:47.497 "nvme_io": false, 00:10:47.497 "nvme_io_md": false, 00:10:47.497 "write_zeroes": true, 00:10:47.497 "zcopy": true, 00:10:47.497 "get_zone_info": false, 00:10:47.497 "zone_management": false, 00:10:47.497 "zone_append": false, 00:10:47.497 "compare": false, 00:10:47.497 "compare_and_write": false, 00:10:47.497 "abort": true, 00:10:47.497 "seek_hole": false, 00:10:47.497 "seek_data": false, 00:10:47.497 "copy": true, 00:10:47.497 "nvme_iov_md": false 00:10:47.497 }, 00:10:47.497 "memory_domains": [ 00:10:47.497 { 00:10:47.497 "dma_device_id": "system", 00:10:47.497 "dma_device_type": 1 00:10:47.497 }, 00:10:47.497 { 00:10:47.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.497 "dma_device_type": 2 00:10:47.497 } 00:10:47.497 ], 00:10:47.497 "driver_specific": {} 00:10:47.497 } 00:10:47.497 ] 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.497 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.755 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.755 "name": "Existed_Raid", 00:10:47.755 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:47.755 "strip_size_kb": 64, 00:10:47.755 "state": "configuring", 00:10:47.755 "raid_level": "raid0", 00:10:47.755 "superblock": true, 
00:10:47.755 "num_base_bdevs": 3, 00:10:47.755 "num_base_bdevs_discovered": 2, 00:10:47.755 "num_base_bdevs_operational": 3, 00:10:47.755 "base_bdevs_list": [ 00:10:47.755 { 00:10:47.755 "name": "BaseBdev1", 00:10:47.755 "uuid": "0c93b213-fec5-46e0-a14f-f27b12796c84", 00:10:47.755 "is_configured": true, 00:10:47.755 "data_offset": 2048, 00:10:47.755 "data_size": 63488 00:10:47.755 }, 00:10:47.755 { 00:10:47.755 "name": null, 00:10:47.755 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:47.755 "is_configured": false, 00:10:47.755 "data_offset": 0, 00:10:47.755 "data_size": 63488 00:10:47.755 }, 00:10:47.755 { 00:10:47.755 "name": "BaseBdev3", 00:10:47.755 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:47.755 "is_configured": true, 00:10:47.755 "data_offset": 2048, 00:10:47.755 "data_size": 63488 00:10:47.755 } 00:10:47.755 ] 00:10:47.755 }' 00:10:47.755 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.755 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.013 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.013 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.013 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.013 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.270 [2024-11-15 10:39:18.620469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.270 "name": "Existed_Raid", 00:10:48.270 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:48.270 "strip_size_kb": 64, 00:10:48.270 "state": "configuring", 00:10:48.270 "raid_level": "raid0", 00:10:48.270 "superblock": true, 00:10:48.270 "num_base_bdevs": 3, 00:10:48.270 "num_base_bdevs_discovered": 1, 00:10:48.270 "num_base_bdevs_operational": 3, 00:10:48.270 "base_bdevs_list": [ 00:10:48.270 { 00:10:48.270 "name": "BaseBdev1", 00:10:48.270 "uuid": "0c93b213-fec5-46e0-a14f-f27b12796c84", 00:10:48.270 "is_configured": true, 00:10:48.270 "data_offset": 2048, 00:10:48.270 "data_size": 63488 00:10:48.270 }, 00:10:48.270 { 00:10:48.270 "name": null, 00:10:48.270 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:48.270 "is_configured": false, 00:10:48.270 "data_offset": 0, 00:10:48.270 "data_size": 63488 00:10:48.270 }, 00:10:48.270 { 00:10:48.270 "name": null, 00:10:48.270 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:48.270 "is_configured": false, 00:10:48.270 "data_offset": 0, 00:10:48.270 "data_size": 63488 00:10:48.270 } 00:10:48.270 ] 00:10:48.270 }' 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.270 10:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.834 [2024-11-15 10:39:19.184640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:48.834 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.835 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.835 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.835 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.835 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.835 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.835 "name": "Existed_Raid", 00:10:48.835 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:48.835 "strip_size_kb": 64, 00:10:48.835 "state": "configuring", 00:10:48.835 "raid_level": "raid0", 00:10:48.835 "superblock": true, 00:10:48.835 "num_base_bdevs": 3, 00:10:48.835 "num_base_bdevs_discovered": 2, 00:10:48.835 "num_base_bdevs_operational": 3, 00:10:48.835 "base_bdevs_list": [ 00:10:48.835 { 00:10:48.835 "name": "BaseBdev1", 00:10:48.835 "uuid": "0c93b213-fec5-46e0-a14f-f27b12796c84", 00:10:48.835 "is_configured": true, 00:10:48.835 "data_offset": 2048, 00:10:48.835 "data_size": 63488 00:10:48.835 }, 00:10:48.835 { 00:10:48.835 "name": null, 00:10:48.835 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:48.835 "is_configured": false, 00:10:48.835 "data_offset": 0, 00:10:48.835 "data_size": 63488 00:10:48.835 }, 00:10:48.835 { 00:10:48.835 "name": "BaseBdev3", 00:10:48.835 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:48.835 "is_configured": true, 00:10:48.835 "data_offset": 2048, 00:10:48.835 "data_size": 63488 00:10:48.835 } 00:10:48.835 ] 00:10:48.835 }' 00:10:48.835 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.835 10:39:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.401 [2024-11-15 10:39:19.744825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.401 "name": "Existed_Raid", 00:10:49.401 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:49.401 "strip_size_kb": 64, 00:10:49.401 "state": "configuring", 00:10:49.401 "raid_level": "raid0", 00:10:49.401 "superblock": true, 00:10:49.401 "num_base_bdevs": 3, 00:10:49.401 "num_base_bdevs_discovered": 1, 00:10:49.401 "num_base_bdevs_operational": 3, 00:10:49.401 "base_bdevs_list": [ 00:10:49.401 { 00:10:49.401 "name": null, 00:10:49.401 "uuid": "0c93b213-fec5-46e0-a14f-f27b12796c84", 00:10:49.401 "is_configured": false, 00:10:49.401 "data_offset": 0, 00:10:49.401 "data_size": 63488 00:10:49.401 }, 00:10:49.401 { 00:10:49.401 "name": null, 00:10:49.401 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:49.401 "is_configured": false, 00:10:49.401 "data_offset": 0, 00:10:49.401 
"data_size": 63488 00:10:49.401 }, 00:10:49.401 { 00:10:49.401 "name": "BaseBdev3", 00:10:49.401 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:49.401 "is_configured": true, 00:10:49.401 "data_offset": 2048, 00:10:49.401 "data_size": 63488 00:10:49.401 } 00:10:49.401 ] 00:10:49.401 }' 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.401 10:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 [2024-11-15 10:39:20.405129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:49.967 10:39:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.967 "name": "Existed_Raid", 00:10:49.967 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:49.967 "strip_size_kb": 64, 00:10:49.967 "state": "configuring", 00:10:49.967 "raid_level": "raid0", 00:10:49.967 "superblock": true, 00:10:49.967 "num_base_bdevs": 3, 00:10:49.967 
"num_base_bdevs_discovered": 2, 00:10:49.967 "num_base_bdevs_operational": 3, 00:10:49.967 "base_bdevs_list": [ 00:10:49.967 { 00:10:49.967 "name": null, 00:10:49.967 "uuid": "0c93b213-fec5-46e0-a14f-f27b12796c84", 00:10:49.967 "is_configured": false, 00:10:49.967 "data_offset": 0, 00:10:49.967 "data_size": 63488 00:10:49.967 }, 00:10:49.967 { 00:10:49.967 "name": "BaseBdev2", 00:10:49.967 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:49.967 "is_configured": true, 00:10:49.967 "data_offset": 2048, 00:10:49.967 "data_size": 63488 00:10:49.967 }, 00:10:49.967 { 00:10:49.967 "name": "BaseBdev3", 00:10:49.967 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:49.967 "is_configured": true, 00:10:49.967 "data_offset": 2048, 00:10:49.967 "data_size": 63488 00:10:49.967 } 00:10:49.967 ] 00:10:49.967 }' 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.967 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:50.533 10:39:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.533 10:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0c93b213-fec5-46e0-a14f-f27b12796c84 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.533 [2024-11-15 10:39:21.074736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:50.533 NewBaseBdev 00:10:50.533 [2024-11-15 10:39:21.075211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:50.533 [2024-11-15 10:39:21.075243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:50.533 [2024-11-15 10:39:21.075575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:50.533 [2024-11-15 10:39:21.075755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:50.533 [2024-11-15 10:39:21.075772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:50.533 [2024-11-15 10:39:21.075935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:50.533 
10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.533 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.792 [ 00:10:50.792 { 00:10:50.792 "name": "NewBaseBdev", 00:10:50.792 "aliases": [ 00:10:50.792 "0c93b213-fec5-46e0-a14f-f27b12796c84" 00:10:50.792 ], 00:10:50.792 "product_name": "Malloc disk", 00:10:50.792 "block_size": 512, 00:10:50.792 "num_blocks": 65536, 00:10:50.792 "uuid": "0c93b213-fec5-46e0-a14f-f27b12796c84", 00:10:50.792 "assigned_rate_limits": { 00:10:50.792 "rw_ios_per_sec": 0, 00:10:50.792 "rw_mbytes_per_sec": 0, 00:10:50.792 "r_mbytes_per_sec": 0, 00:10:50.792 "w_mbytes_per_sec": 0 00:10:50.792 }, 00:10:50.792 "claimed": true, 00:10:50.792 "claim_type": "exclusive_write", 00:10:50.792 "zoned": false, 00:10:50.792 "supported_io_types": { 00:10:50.792 "read": true, 00:10:50.792 "write": true, 00:10:50.792 
"unmap": true, 00:10:50.792 "flush": true, 00:10:50.792 "reset": true, 00:10:50.792 "nvme_admin": false, 00:10:50.792 "nvme_io": false, 00:10:50.792 "nvme_io_md": false, 00:10:50.792 "write_zeroes": true, 00:10:50.792 "zcopy": true, 00:10:50.792 "get_zone_info": false, 00:10:50.792 "zone_management": false, 00:10:50.792 "zone_append": false, 00:10:50.792 "compare": false, 00:10:50.792 "compare_and_write": false, 00:10:50.792 "abort": true, 00:10:50.792 "seek_hole": false, 00:10:50.792 "seek_data": false, 00:10:50.792 "copy": true, 00:10:50.792 "nvme_iov_md": false 00:10:50.792 }, 00:10:50.792 "memory_domains": [ 00:10:50.792 { 00:10:50.792 "dma_device_id": "system", 00:10:50.792 "dma_device_type": 1 00:10:50.792 }, 00:10:50.792 { 00:10:50.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.792 "dma_device_type": 2 00:10:50.792 } 00:10:50.792 ], 00:10:50.792 "driver_specific": {} 00:10:50.792 } 00:10:50.792 ] 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.792 "name": "Existed_Raid", 00:10:50.792 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:50.792 "strip_size_kb": 64, 00:10:50.792 "state": "online", 00:10:50.792 "raid_level": "raid0", 00:10:50.792 "superblock": true, 00:10:50.792 "num_base_bdevs": 3, 00:10:50.792 "num_base_bdevs_discovered": 3, 00:10:50.792 "num_base_bdevs_operational": 3, 00:10:50.792 "base_bdevs_list": [ 00:10:50.792 { 00:10:50.792 "name": "NewBaseBdev", 00:10:50.792 "uuid": "0c93b213-fec5-46e0-a14f-f27b12796c84", 00:10:50.792 "is_configured": true, 00:10:50.792 "data_offset": 2048, 00:10:50.792 "data_size": 63488 00:10:50.792 }, 00:10:50.792 { 00:10:50.792 "name": "BaseBdev2", 00:10:50.792 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:50.792 "is_configured": true, 00:10:50.792 "data_offset": 2048, 00:10:50.792 "data_size": 63488 00:10:50.792 }, 00:10:50.792 { 00:10:50.792 "name": "BaseBdev3", 00:10:50.792 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:50.792 
"is_configured": true, 00:10:50.792 "data_offset": 2048, 00:10:50.792 "data_size": 63488 00:10:50.792 } 00:10:50.792 ] 00:10:50.792 }' 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.792 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.359 [2024-11-15 10:39:21.619302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.359 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.359 "name": "Existed_Raid", 00:10:51.359 "aliases": [ 00:10:51.359 "96c391a7-d009-471f-92af-544d3732c897" 00:10:51.359 ], 00:10:51.359 "product_name": "Raid 
Volume", 00:10:51.359 "block_size": 512, 00:10:51.359 "num_blocks": 190464, 00:10:51.359 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:51.359 "assigned_rate_limits": { 00:10:51.359 "rw_ios_per_sec": 0, 00:10:51.359 "rw_mbytes_per_sec": 0, 00:10:51.359 "r_mbytes_per_sec": 0, 00:10:51.359 "w_mbytes_per_sec": 0 00:10:51.359 }, 00:10:51.359 "claimed": false, 00:10:51.359 "zoned": false, 00:10:51.359 "supported_io_types": { 00:10:51.359 "read": true, 00:10:51.359 "write": true, 00:10:51.359 "unmap": true, 00:10:51.359 "flush": true, 00:10:51.359 "reset": true, 00:10:51.359 "nvme_admin": false, 00:10:51.359 "nvme_io": false, 00:10:51.359 "nvme_io_md": false, 00:10:51.359 "write_zeroes": true, 00:10:51.359 "zcopy": false, 00:10:51.359 "get_zone_info": false, 00:10:51.359 "zone_management": false, 00:10:51.359 "zone_append": false, 00:10:51.359 "compare": false, 00:10:51.359 "compare_and_write": false, 00:10:51.359 "abort": false, 00:10:51.359 "seek_hole": false, 00:10:51.359 "seek_data": false, 00:10:51.359 "copy": false, 00:10:51.359 "nvme_iov_md": false 00:10:51.359 }, 00:10:51.359 "memory_domains": [ 00:10:51.359 { 00:10:51.359 "dma_device_id": "system", 00:10:51.359 "dma_device_type": 1 00:10:51.359 }, 00:10:51.359 { 00:10:51.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.359 "dma_device_type": 2 00:10:51.359 }, 00:10:51.359 { 00:10:51.359 "dma_device_id": "system", 00:10:51.359 "dma_device_type": 1 00:10:51.359 }, 00:10:51.359 { 00:10:51.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.359 "dma_device_type": 2 00:10:51.359 }, 00:10:51.359 { 00:10:51.359 "dma_device_id": "system", 00:10:51.359 "dma_device_type": 1 00:10:51.360 }, 00:10:51.360 { 00:10:51.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.360 "dma_device_type": 2 00:10:51.360 } 00:10:51.360 ], 00:10:51.360 "driver_specific": { 00:10:51.360 "raid": { 00:10:51.360 "uuid": "96c391a7-d009-471f-92af-544d3732c897", 00:10:51.360 "strip_size_kb": 64, 00:10:51.360 "state": "online", 
00:10:51.360 "raid_level": "raid0", 00:10:51.360 "superblock": true, 00:10:51.360 "num_base_bdevs": 3, 00:10:51.360 "num_base_bdevs_discovered": 3, 00:10:51.360 "num_base_bdevs_operational": 3, 00:10:51.360 "base_bdevs_list": [ 00:10:51.360 { 00:10:51.360 "name": "NewBaseBdev", 00:10:51.360 "uuid": "0c93b213-fec5-46e0-a14f-f27b12796c84", 00:10:51.360 "is_configured": true, 00:10:51.360 "data_offset": 2048, 00:10:51.360 "data_size": 63488 00:10:51.360 }, 00:10:51.360 { 00:10:51.360 "name": "BaseBdev2", 00:10:51.360 "uuid": "5f6fbf3d-9bd1-43f4-8b7a-8f01364a2675", 00:10:51.360 "is_configured": true, 00:10:51.360 "data_offset": 2048, 00:10:51.360 "data_size": 63488 00:10:51.360 }, 00:10:51.360 { 00:10:51.360 "name": "BaseBdev3", 00:10:51.360 "uuid": "bdd8d0d6-a588-4b68-91b5-7f0ebeb7dd11", 00:10:51.360 "is_configured": true, 00:10:51.360 "data_offset": 2048, 00:10:51.360 "data_size": 63488 00:10:51.360 } 00:10:51.360 ] 00:10:51.360 } 00:10:51.360 } 00:10:51.360 }' 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:51.360 BaseBdev2 00:10:51.360 BaseBdev3' 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.360 10:39:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.360 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.618 [2024-11-15 10:39:21.926992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:51.618 [2024-11-15 10:39:21.927026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.618 [2024-11-15 10:39:21.927127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.618 [2024-11-15 10:39:21.927201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.618 [2024-11-15 10:39:21.927220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64654 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64654 ']' 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 
64654 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64654 00:10:51.618 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:51.619 killing process with pid 64654 00:10:51.619 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:51.619 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64654' 00:10:51.619 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64654 00:10:51.619 [2024-11-15 10:39:21.966219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.619 10:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64654 00:10:51.876 [2024-11-15 10:39:22.217624] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.810 10:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:52.810 00:10:52.810 real 0m11.665s 00:10:52.810 user 0m19.606s 00:10:52.810 sys 0m1.385s 00:10:52.810 ************************************ 00:10:52.810 END TEST raid_state_function_test_sb 00:10:52.810 ************************************ 00:10:52.810 10:39:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.810 10:39:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.810 10:39:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:52.810 10:39:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:52.810 
10:39:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.810 10:39:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.810 ************************************ 00:10:52.811 START TEST raid_superblock_test 00:10:52.811 ************************************ 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:52.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65285 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65285 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65285 ']' 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:52.811 10:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.069 [2024-11-15 10:39:23.369015] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:10:53.069 [2024-11-15 10:39:23.369306] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65285 ] 00:10:53.069 [2024-11-15 10:39:23.554266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.327 [2024-11-15 10:39:23.657141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.327 [2024-11-15 10:39:23.838633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.327 [2024-11-15 10:39:23.838703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:53.895 
10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.895 malloc1 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.895 [2024-11-15 10:39:24.398777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:53.895 [2024-11-15 10:39:24.398983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.895 [2024-11-15 10:39:24.399061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:53.895 [2024-11-15 10:39:24.399337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.895 [2024-11-15 10:39:24.401925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.895 [2024-11-15 10:39:24.402090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:53.895 pt1 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.895 malloc2 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.895 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.895 [2024-11-15 10:39:24.450169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.895 [2024-11-15 10:39:24.450385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.895 [2024-11-15 10:39:24.450481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:53.895 [2024-11-15 10:39:24.450585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.154 [2024-11-15 10:39:24.453165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.154 [2024-11-15 10:39:24.453333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:54.154 
pt2 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.154 malloc3 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.154 [2024-11-15 10:39:24.513682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:54.154 [2024-11-15 10:39:24.513876] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.154 [2024-11-15 10:39:24.513955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:54.154 [2024-11-15 10:39:24.514066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.154 [2024-11-15 10:39:24.516668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.154 [2024-11-15 10:39:24.516823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:54.154 pt3 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.154 [2024-11-15 10:39:24.525809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:54.154 [2024-11-15 10:39:24.528032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:54.154 [2024-11-15 10:39:24.528259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:54.154 [2024-11-15 10:39:24.528503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:54.154 [2024-11-15 10:39:24.528528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:54.154 [2024-11-15 10:39:24.528837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:54.154 [2024-11-15 10:39:24.529033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:54.154 [2024-11-15 10:39:24.529048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:54.154 [2024-11-15 10:39:24.529232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.154 10:39:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.154 "name": "raid_bdev1", 00:10:54.154 "uuid": "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33", 00:10:54.154 "strip_size_kb": 64, 00:10:54.154 "state": "online", 00:10:54.154 "raid_level": "raid0", 00:10:54.154 "superblock": true, 00:10:54.154 "num_base_bdevs": 3, 00:10:54.154 "num_base_bdevs_discovered": 3, 00:10:54.154 "num_base_bdevs_operational": 3, 00:10:54.154 "base_bdevs_list": [ 00:10:54.154 { 00:10:54.154 "name": "pt1", 00:10:54.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:54.154 "is_configured": true, 00:10:54.154 "data_offset": 2048, 00:10:54.154 "data_size": 63488 00:10:54.154 }, 00:10:54.154 { 00:10:54.154 "name": "pt2", 00:10:54.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.154 "is_configured": true, 00:10:54.154 "data_offset": 2048, 00:10:54.154 "data_size": 63488 00:10:54.154 }, 00:10:54.154 { 00:10:54.154 "name": "pt3", 00:10:54.154 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:54.154 "is_configured": true, 00:10:54.154 "data_offset": 2048, 00:10:54.154 "data_size": 63488 00:10:54.154 } 00:10:54.154 ] 00:10:54.154 }' 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.154 10:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.721 [2024-11-15 10:39:25.038294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.721 "name": "raid_bdev1", 00:10:54.721 "aliases": [ 00:10:54.721 "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33" 00:10:54.721 ], 00:10:54.721 "product_name": "Raid Volume", 00:10:54.721 "block_size": 512, 00:10:54.721 "num_blocks": 190464, 00:10:54.721 "uuid": "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33", 00:10:54.721 "assigned_rate_limits": { 00:10:54.721 "rw_ios_per_sec": 0, 00:10:54.721 "rw_mbytes_per_sec": 0, 00:10:54.721 "r_mbytes_per_sec": 0, 00:10:54.721 "w_mbytes_per_sec": 0 00:10:54.721 }, 00:10:54.721 "claimed": false, 00:10:54.721 "zoned": false, 00:10:54.721 "supported_io_types": { 00:10:54.721 "read": true, 00:10:54.721 "write": true, 00:10:54.721 "unmap": true, 00:10:54.721 "flush": true, 00:10:54.721 "reset": true, 00:10:54.721 "nvme_admin": false, 00:10:54.721 "nvme_io": false, 00:10:54.721 "nvme_io_md": false, 00:10:54.721 "write_zeroes": true, 00:10:54.721 "zcopy": false, 00:10:54.721 "get_zone_info": false, 00:10:54.721 "zone_management": false, 00:10:54.721 "zone_append": false, 00:10:54.721 "compare": 
false, 00:10:54.721 "compare_and_write": false, 00:10:54.721 "abort": false, 00:10:54.721 "seek_hole": false, 00:10:54.721 "seek_data": false, 00:10:54.721 "copy": false, 00:10:54.721 "nvme_iov_md": false 00:10:54.721 }, 00:10:54.721 "memory_domains": [ 00:10:54.721 { 00:10:54.721 "dma_device_id": "system", 00:10:54.721 "dma_device_type": 1 00:10:54.721 }, 00:10:54.721 { 00:10:54.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.721 "dma_device_type": 2 00:10:54.721 }, 00:10:54.721 { 00:10:54.721 "dma_device_id": "system", 00:10:54.721 "dma_device_type": 1 00:10:54.721 }, 00:10:54.721 { 00:10:54.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.721 "dma_device_type": 2 00:10:54.721 }, 00:10:54.721 { 00:10:54.721 "dma_device_id": "system", 00:10:54.721 "dma_device_type": 1 00:10:54.721 }, 00:10:54.721 { 00:10:54.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.721 "dma_device_type": 2 00:10:54.721 } 00:10:54.721 ], 00:10:54.721 "driver_specific": { 00:10:54.721 "raid": { 00:10:54.721 "uuid": "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33", 00:10:54.721 "strip_size_kb": 64, 00:10:54.721 "state": "online", 00:10:54.721 "raid_level": "raid0", 00:10:54.721 "superblock": true, 00:10:54.721 "num_base_bdevs": 3, 00:10:54.721 "num_base_bdevs_discovered": 3, 00:10:54.721 "num_base_bdevs_operational": 3, 00:10:54.721 "base_bdevs_list": [ 00:10:54.721 { 00:10:54.721 "name": "pt1", 00:10:54.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:54.721 "is_configured": true, 00:10:54.721 "data_offset": 2048, 00:10:54.721 "data_size": 63488 00:10:54.721 }, 00:10:54.721 { 00:10:54.721 "name": "pt2", 00:10:54.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.721 "is_configured": true, 00:10:54.721 "data_offset": 2048, 00:10:54.721 "data_size": 63488 00:10:54.721 }, 00:10:54.721 { 00:10:54.721 "name": "pt3", 00:10:54.721 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:54.721 "is_configured": true, 00:10:54.721 "data_offset": 2048, 00:10:54.721 "data_size": 
63488 00:10:54.721 } 00:10:54.721 ] 00:10:54.721 } 00:10:54.721 } 00:10:54.721 }' 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:54.721 pt2 00:10:54.721 pt3' 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.721 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.979 
10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.979 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.980 [2024-11-15 10:39:25.394340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=039dd53a-2eb1-45e1-a7e8-1cdce4ecea33 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 039dd53a-2eb1-45e1-a7e8-1cdce4ecea33 ']' 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.980 [2024-11-15 10:39:25.454010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.980 [2024-11-15 10:39:25.454046] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.980 [2024-11-15 10:39:25.454140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.980 [2024-11-15 10:39:25.454224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.980 [2024-11-15 10:39:25.454242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:54.980 10:39:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.980 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.239 [2024-11-15 10:39:25.582083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:55.239 [2024-11-15 10:39:25.584353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:55.239 [2024-11-15 10:39:25.584584] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:55.239 [2024-11-15 10:39:25.584705] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:55.239 [2024-11-15 10:39:25.584969] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:55.239 [2024-11-15 10:39:25.585169] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:55.239 [2024-11-15 10:39:25.585454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.239 [2024-11-15 10:39:25.585632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:55.239 request: 00:10:55.239 { 00:10:55.239 "name": "raid_bdev1", 00:10:55.239 "raid_level": "raid0", 00:10:55.239 "base_bdevs": [ 00:10:55.239 "malloc1", 00:10:55.239 "malloc2", 00:10:55.239 "malloc3" 00:10:55.239 ], 00:10:55.239 "strip_size_kb": 64, 00:10:55.239 "superblock": false, 00:10:55.239 "method": "bdev_raid_create", 00:10:55.239 "req_id": 1 00:10:55.239 } 00:10:55.239 Got JSON-RPC error response 00:10:55.239 response: 00:10:55.239 { 00:10:55.239 "code": -17, 00:10:55.239 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:55.239 } 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.239 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.239 [2024-11-15 10:39:25.646204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:55.240 [2024-11-15 10:39:25.646288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.240 [2024-11-15 10:39:25.646323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:55.240 [2024-11-15 10:39:25.646339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.240 [2024-11-15 10:39:25.649005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.240 [2024-11-15 10:39:25.649052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:55.240 [2024-11-15 10:39:25.649160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:55.240 [2024-11-15 10:39:25.649228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:55.240 pt1 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.240 "name": "raid_bdev1", 00:10:55.240 "uuid": "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33", 00:10:55.240 
"strip_size_kb": 64, 00:10:55.240 "state": "configuring", 00:10:55.240 "raid_level": "raid0", 00:10:55.240 "superblock": true, 00:10:55.240 "num_base_bdevs": 3, 00:10:55.240 "num_base_bdevs_discovered": 1, 00:10:55.240 "num_base_bdevs_operational": 3, 00:10:55.240 "base_bdevs_list": [ 00:10:55.240 { 00:10:55.240 "name": "pt1", 00:10:55.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.240 "is_configured": true, 00:10:55.240 "data_offset": 2048, 00:10:55.240 "data_size": 63488 00:10:55.240 }, 00:10:55.240 { 00:10:55.240 "name": null, 00:10:55.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.240 "is_configured": false, 00:10:55.240 "data_offset": 2048, 00:10:55.240 "data_size": 63488 00:10:55.240 }, 00:10:55.240 { 00:10:55.240 "name": null, 00:10:55.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.240 "is_configured": false, 00:10:55.240 "data_offset": 2048, 00:10:55.240 "data_size": 63488 00:10:55.240 } 00:10:55.240 ] 00:10:55.240 }' 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.240 10:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.806 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:55.806 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:55.806 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.806 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.806 [2024-11-15 10:39:26.126336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:55.806 [2024-11-15 10:39:26.126429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.806 [2024-11-15 10:39:26.126466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:55.806 [2024-11-15 10:39:26.126481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.807 [2024-11-15 10:39:26.127015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.807 [2024-11-15 10:39:26.127059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:55.807 [2024-11-15 10:39:26.127174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:55.807 [2024-11-15 10:39:26.127214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:55.807 pt2 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.807 [2024-11-15 10:39:26.134324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.807 10:39:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.807 "name": "raid_bdev1", 00:10:55.807 "uuid": "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33", 00:10:55.807 "strip_size_kb": 64, 00:10:55.807 "state": "configuring", 00:10:55.807 "raid_level": "raid0", 00:10:55.807 "superblock": true, 00:10:55.807 "num_base_bdevs": 3, 00:10:55.807 "num_base_bdevs_discovered": 1, 00:10:55.807 "num_base_bdevs_operational": 3, 00:10:55.807 "base_bdevs_list": [ 00:10:55.807 { 00:10:55.807 "name": "pt1", 00:10:55.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.807 "is_configured": true, 00:10:55.807 "data_offset": 2048, 00:10:55.807 "data_size": 63488 00:10:55.807 }, 00:10:55.807 { 00:10:55.807 "name": null, 00:10:55.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.807 "is_configured": false, 00:10:55.807 "data_offset": 0, 00:10:55.807 "data_size": 63488 00:10:55.807 }, 00:10:55.807 { 00:10:55.807 "name": null, 00:10:55.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.807 
"is_configured": false, 00:10:55.807 "data_offset": 2048, 00:10:55.807 "data_size": 63488 00:10:55.807 } 00:10:55.807 ] 00:10:55.807 }' 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.807 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.376 [2024-11-15 10:39:26.654463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.376 [2024-11-15 10:39:26.654552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.376 [2024-11-15 10:39:26.654580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:56.376 [2024-11-15 10:39:26.654597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.376 [2024-11-15 10:39:26.655155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.376 [2024-11-15 10:39:26.655187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.376 [2024-11-15 10:39:26.655286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:56.376 [2024-11-15 10:39:26.655323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.376 pt2 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.376 [2024-11-15 10:39:26.662430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:56.376 [2024-11-15 10:39:26.662635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.376 [2024-11-15 10:39:26.662668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:56.376 [2024-11-15 10:39:26.662686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.376 [2024-11-15 10:39:26.663129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.376 [2024-11-15 10:39:26.663175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:56.376 [2024-11-15 10:39:26.663253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:56.376 [2024-11-15 10:39:26.663285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:56.376 [2024-11-15 10:39:26.663448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:56.376 [2024-11-15 10:39:26.663471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:56.376 [2024-11-15 10:39:26.663780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:56.376 [2024-11-15 10:39:26.663972] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:56.376 [2024-11-15 10:39:26.663988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:56.376 [2024-11-15 10:39:26.664154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.376 pt3 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.376 "name": "raid_bdev1", 00:10:56.376 "uuid": "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33", 00:10:56.376 "strip_size_kb": 64, 00:10:56.376 "state": "online", 00:10:56.376 "raid_level": "raid0", 00:10:56.376 "superblock": true, 00:10:56.376 "num_base_bdevs": 3, 00:10:56.376 "num_base_bdevs_discovered": 3, 00:10:56.376 "num_base_bdevs_operational": 3, 00:10:56.376 "base_bdevs_list": [ 00:10:56.376 { 00:10:56.376 "name": "pt1", 00:10:56.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.376 "is_configured": true, 00:10:56.376 "data_offset": 2048, 00:10:56.376 "data_size": 63488 00:10:56.376 }, 00:10:56.376 { 00:10:56.376 "name": "pt2", 00:10:56.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.376 "is_configured": true, 00:10:56.376 "data_offset": 2048, 00:10:56.376 "data_size": 63488 00:10:56.376 }, 00:10:56.376 { 00:10:56.376 "name": "pt3", 00:10:56.376 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.376 "is_configured": true, 00:10:56.376 "data_offset": 2048, 00:10:56.376 "data_size": 63488 00:10:56.376 } 00:10:56.376 ] 00:10:56.376 }' 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.376 10:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:56.943 10:39:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.943 [2024-11-15 10:39:27.203063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.943 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.943 "name": "raid_bdev1", 00:10:56.943 "aliases": [ 00:10:56.943 "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33" 00:10:56.943 ], 00:10:56.943 "product_name": "Raid Volume", 00:10:56.943 "block_size": 512, 00:10:56.943 "num_blocks": 190464, 00:10:56.943 "uuid": "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33", 00:10:56.943 "assigned_rate_limits": { 00:10:56.943 "rw_ios_per_sec": 0, 00:10:56.943 "rw_mbytes_per_sec": 0, 00:10:56.943 "r_mbytes_per_sec": 0, 00:10:56.943 "w_mbytes_per_sec": 0 00:10:56.943 }, 00:10:56.944 "claimed": false, 00:10:56.944 "zoned": false, 00:10:56.944 "supported_io_types": { 00:10:56.944 "read": true, 00:10:56.944 "write": true, 00:10:56.944 "unmap": true, 00:10:56.944 "flush": true, 00:10:56.944 "reset": true, 00:10:56.944 "nvme_admin": false, 00:10:56.944 "nvme_io": false, 00:10:56.944 "nvme_io_md": false, 00:10:56.944 
"write_zeroes": true, 00:10:56.944 "zcopy": false, 00:10:56.944 "get_zone_info": false, 00:10:56.944 "zone_management": false, 00:10:56.944 "zone_append": false, 00:10:56.944 "compare": false, 00:10:56.944 "compare_and_write": false, 00:10:56.944 "abort": false, 00:10:56.944 "seek_hole": false, 00:10:56.944 "seek_data": false, 00:10:56.944 "copy": false, 00:10:56.944 "nvme_iov_md": false 00:10:56.944 }, 00:10:56.944 "memory_domains": [ 00:10:56.944 { 00:10:56.944 "dma_device_id": "system", 00:10:56.944 "dma_device_type": 1 00:10:56.944 }, 00:10:56.944 { 00:10:56.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.944 "dma_device_type": 2 00:10:56.944 }, 00:10:56.944 { 00:10:56.944 "dma_device_id": "system", 00:10:56.944 "dma_device_type": 1 00:10:56.944 }, 00:10:56.944 { 00:10:56.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.944 "dma_device_type": 2 00:10:56.944 }, 00:10:56.944 { 00:10:56.944 "dma_device_id": "system", 00:10:56.944 "dma_device_type": 1 00:10:56.944 }, 00:10:56.944 { 00:10:56.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.944 "dma_device_type": 2 00:10:56.944 } 00:10:56.944 ], 00:10:56.944 "driver_specific": { 00:10:56.944 "raid": { 00:10:56.944 "uuid": "039dd53a-2eb1-45e1-a7e8-1cdce4ecea33", 00:10:56.944 "strip_size_kb": 64, 00:10:56.944 "state": "online", 00:10:56.944 "raid_level": "raid0", 00:10:56.944 "superblock": true, 00:10:56.944 "num_base_bdevs": 3, 00:10:56.944 "num_base_bdevs_discovered": 3, 00:10:56.944 "num_base_bdevs_operational": 3, 00:10:56.944 "base_bdevs_list": [ 00:10:56.944 { 00:10:56.944 "name": "pt1", 00:10:56.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.944 "is_configured": true, 00:10:56.944 "data_offset": 2048, 00:10:56.944 "data_size": 63488 00:10:56.944 }, 00:10:56.944 { 00:10:56.944 "name": "pt2", 00:10:56.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.944 "is_configured": true, 00:10:56.944 "data_offset": 2048, 00:10:56.944 "data_size": 63488 00:10:56.944 }, 00:10:56.944 
{ 00:10:56.944 "name": "pt3", 00:10:56.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.944 "is_configured": true, 00:10:56.944 "data_offset": 2048, 00:10:56.944 "data_size": 63488 00:10:56.944 } 00:10:56.944 ] 00:10:56.944 } 00:10:56.944 } 00:10:56.944 }' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:56.944 pt2 00:10:56.944 pt3' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:56.944 10:39:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.944 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.204 
[2024-11-15 10:39:27.523090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 039dd53a-2eb1-45e1-a7e8-1cdce4ecea33 '!=' 039dd53a-2eb1-45e1-a7e8-1cdce4ecea33 ']' 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65285 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65285 ']' 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65285 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65285 00:10:57.204 killing process with pid 65285 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65285' 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65285 00:10:57.204 [2024-11-15 10:39:27.602929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.204 10:39:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 65285 00:10:57.204 [2024-11-15 10:39:27.603051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.204 [2024-11-15 10:39:27.603143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.204 [2024-11-15 10:39:27.603164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:57.466 [2024-11-15 10:39:27.857019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.402 ************************************ 00:10:58.402 END TEST raid_superblock_test 00:10:58.402 ************************************ 00:10:58.402 10:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:58.402 00:10:58.402 real 0m5.578s 00:10:58.402 user 0m8.549s 00:10:58.402 sys 0m0.728s 00:10:58.402 10:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.402 10:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.402 10:39:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:58.402 10:39:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:58.402 10:39:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.402 10:39:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.402 ************************************ 00:10:58.402 START TEST raid_read_error_test 00:10:58.402 ************************************ 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:58.402 10:39:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xrIVz5v19x 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65548 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65548 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65548 ']' 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.402 10:39:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.661 [2024-11-15 10:39:29.001848] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:10:58.661 [2024-11-15 10:39:29.002026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65548 ] 00:10:58.661 [2024-11-15 10:39:29.207256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.919 [2024-11-15 10:39:29.309307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.177 [2024-11-15 10:39:29.488309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.177 [2024-11-15 10:39:29.488376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 BaseBdev1_malloc 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 true 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 [2024-11-15 10:39:30.109597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:59.744 [2024-11-15 10:39:30.109673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.744 [2024-11-15 10:39:30.109704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:59.744 [2024-11-15 10:39:30.109722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.744 [2024-11-15 10:39:30.112374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.744 [2024-11-15 10:39:30.112420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:59.744 BaseBdev1 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 BaseBdev2_malloc 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 true 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 [2024-11-15 10:39:30.161096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:59.744 [2024-11-15 10:39:30.161169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.744 [2024-11-15 10:39:30.161195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:59.744 [2024-11-15 10:39:30.161211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.744 [2024-11-15 10:39:30.163811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.744 [2024-11-15 10:39:30.163862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:59.744 BaseBdev2 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 BaseBdev3_malloc 00:10:59.744 10:39:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 true 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 [2024-11-15 10:39:30.224736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:59.744 [2024-11-15 10:39:30.224808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.744 [2024-11-15 10:39:30.224836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:59.744 [2024-11-15 10:39:30.224853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.744 [2024-11-15 10:39:30.227473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.744 [2024-11-15 10:39:30.227651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:59.744 BaseBdev3 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:59.744 10:39:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.745 [2024-11-15 10:39:30.236837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.745 [2024-11-15 10:39:30.239183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.745 [2024-11-15 10:39:30.239442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.745 [2024-11-15 10:39:30.239848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:59.745 [2024-11-15 10:39:30.239982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:59.745 [2024-11-15 10:39:30.240343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:59.745 [2024-11-15 10:39:30.240697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:59.745 [2024-11-15 10:39:30.240835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:59.745 [2024-11-15 10:39:30.241192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.745 10:39:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.745 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.003 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.003 "name": "raid_bdev1", 00:11:00.003 "uuid": "a134312d-2929-471a-81b1-430367cbc9ee", 00:11:00.003 "strip_size_kb": 64, 00:11:00.003 "state": "online", 00:11:00.003 "raid_level": "raid0", 00:11:00.003 "superblock": true, 00:11:00.003 "num_base_bdevs": 3, 00:11:00.003 "num_base_bdevs_discovered": 3, 00:11:00.003 "num_base_bdevs_operational": 3, 00:11:00.003 "base_bdevs_list": [ 00:11:00.003 { 00:11:00.003 "name": "BaseBdev1", 00:11:00.003 "uuid": "2d81bd6f-6c51-5f8e-bc4f-367400826f45", 00:11:00.003 "is_configured": true, 00:11:00.003 "data_offset": 2048, 00:11:00.003 "data_size": 63488 00:11:00.003 }, 00:11:00.003 { 00:11:00.003 "name": "BaseBdev2", 00:11:00.003 "uuid": "5298c2fb-9f54-56d9-8db5-f850b102e9d1", 00:11:00.003 "is_configured": true, 00:11:00.003 "data_offset": 2048, 00:11:00.003 "data_size": 63488 
00:11:00.003 }, 00:11:00.003 { 00:11:00.003 "name": "BaseBdev3", 00:11:00.003 "uuid": "064f2b53-6f5f-5a13-b239-f5a596f07335", 00:11:00.003 "is_configured": true, 00:11:00.003 "data_offset": 2048, 00:11:00.003 "data_size": 63488 00:11:00.003 } 00:11:00.003 ] 00:11:00.003 }' 00:11:00.003 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.003 10:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.261 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:00.261 10:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:00.519 [2024-11-15 10:39:30.846605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.453 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.454 "name": "raid_bdev1", 00:11:01.454 "uuid": "a134312d-2929-471a-81b1-430367cbc9ee", 00:11:01.454 "strip_size_kb": 64, 00:11:01.454 "state": "online", 00:11:01.454 "raid_level": "raid0", 00:11:01.454 "superblock": true, 00:11:01.454 "num_base_bdevs": 3, 00:11:01.454 "num_base_bdevs_discovered": 3, 00:11:01.454 "num_base_bdevs_operational": 3, 00:11:01.454 "base_bdevs_list": [ 00:11:01.454 { 00:11:01.454 "name": "BaseBdev1", 00:11:01.454 "uuid": "2d81bd6f-6c51-5f8e-bc4f-367400826f45", 00:11:01.454 "is_configured": true, 00:11:01.454 "data_offset": 2048, 00:11:01.454 "data_size": 63488 
00:11:01.454 }, 00:11:01.454 { 00:11:01.454 "name": "BaseBdev2", 00:11:01.454 "uuid": "5298c2fb-9f54-56d9-8db5-f850b102e9d1", 00:11:01.454 "is_configured": true, 00:11:01.454 "data_offset": 2048, 00:11:01.454 "data_size": 63488 00:11:01.454 }, 00:11:01.454 { 00:11:01.454 "name": "BaseBdev3", 00:11:01.454 "uuid": "064f2b53-6f5f-5a13-b239-f5a596f07335", 00:11:01.454 "is_configured": true, 00:11:01.454 "data_offset": 2048, 00:11:01.454 "data_size": 63488 00:11:01.454 } 00:11:01.454 ] 00:11:01.454 }' 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.454 10:39:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.712 [2024-11-15 10:39:32.195792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.712 [2024-11-15 10:39:32.195826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.712 { 00:11:01.712 "results": [ 00:11:01.712 { 00:11:01.712 "job": "raid_bdev1", 00:11:01.712 "core_mask": "0x1", 00:11:01.712 "workload": "randrw", 00:11:01.712 "percentage": 50, 00:11:01.712 "status": "finished", 00:11:01.712 "queue_depth": 1, 00:11:01.712 "io_size": 131072, 00:11:01.712 "runtime": 1.346826, 00:11:01.712 "iops": 11184.81526195663, 00:11:01.712 "mibps": 1398.1019077445787, 00:11:01.712 "io_failed": 1, 00:11:01.712 "io_timeout": 0, 00:11:01.712 "avg_latency_us": 122.25370256162688, 00:11:01.712 "min_latency_us": 28.625454545454545, 00:11:01.712 "max_latency_us": 1891.6072727272726 00:11:01.712 } 00:11:01.712 ], 00:11:01.712 "core_count": 1 00:11:01.712 } 00:11:01.712 [2024-11-15 
10:39:32.199281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.712 [2024-11-15 10:39:32.199337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.712 [2024-11-15 10:39:32.199406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.712 [2024-11-15 10:39:32.199422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65548 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65548 ']' 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65548 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:01.712 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65548 00:11:01.712 killing process with pid 65548 00:11:01.713 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:01.713 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:01.713 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65548' 00:11:01.713 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65548 00:11:01.713 [2024-11-15 10:39:32.240623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.713 10:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65548 00:11:01.971 [2024-11-15 
10:39:32.433517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xrIVz5v19x 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:03.346 00:11:03.346 real 0m4.607s 00:11:03.346 user 0m5.823s 00:11:03.346 sys 0m0.476s 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:03.346 10:39:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.346 ************************************ 00:11:03.346 END TEST raid_read_error_test 00:11:03.346 ************************************ 00:11:03.346 10:39:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:03.346 10:39:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:03.346 10:39:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.346 10:39:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.346 ************************************ 00:11:03.346 START TEST raid_write_error_test 00:11:03.346 ************************************ 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:11:03.346 10:39:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:03.346 10:39:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SMKFP4PXmB 00:11:03.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65689 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65689 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65689 ']' 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:03.346 10:39:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.347 [2024-11-15 10:39:33.692245] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:11:03.347 [2024-11-15 10:39:33.692488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65689 ] 00:11:03.347 [2024-11-15 10:39:33.877006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.621 [2024-11-15 10:39:33.983262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.621 [2024-11-15 10:39:34.169734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.621 [2024-11-15 10:39:34.170009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.188 BaseBdev1_malloc 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.188 true 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.188 [2024-11-15 10:39:34.728950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:04.188 [2024-11-15 10:39:34.729168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.188 [2024-11-15 10:39:34.729210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:04.188 [2024-11-15 10:39:34.729228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.188 [2024-11-15 10:39:34.731879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.188 [2024-11-15 10:39:34.731932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.188 BaseBdev1 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.188 10:39:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.446 BaseBdev2_malloc 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.446 true 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.446 [2024-11-15 10:39:34.780794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:04.446 [2024-11-15 10:39:34.780866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.446 [2024-11-15 10:39:34.780893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:04.446 [2024-11-15 10:39:34.780910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.446 [2024-11-15 10:39:34.783575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.446 [2024-11-15 10:39:34.783630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:04.446 BaseBdev2 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.446 10:39:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.446 BaseBdev3_malloc 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.446 true 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.446 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.446 [2024-11-15 10:39:34.852037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:04.446 [2024-11-15 10:39:34.852117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.447 [2024-11-15 10:39:34.852150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:04.447 [2024-11-15 10:39:34.852170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.447 [2024-11-15 10:39:34.855383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.447 [2024-11-15 10:39:34.855441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:04.447 BaseBdev3 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.447 [2024-11-15 10:39:34.860277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.447 [2024-11-15 10:39:34.863155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.447 [2024-11-15 10:39:34.863456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.447 [2024-11-15 10:39:34.863918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:04.447 [2024-11-15 10:39:34.864069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:04.447 [2024-11-15 10:39:34.864515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:04.447 [2024-11-15 10:39:34.864927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:04.447 [2024-11-15 10:39:34.865088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:04.447 [2024-11-15 10:39:34.865579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.447 "name": "raid_bdev1", 00:11:04.447 "uuid": "515bb019-f034-494d-a616-9987ad925b69", 00:11:04.447 "strip_size_kb": 64, 00:11:04.447 "state": "online", 00:11:04.447 "raid_level": "raid0", 00:11:04.447 "superblock": true, 00:11:04.447 "num_base_bdevs": 3, 00:11:04.447 "num_base_bdevs_discovered": 3, 00:11:04.447 "num_base_bdevs_operational": 3, 00:11:04.447 "base_bdevs_list": [ 00:11:04.447 { 00:11:04.447 "name": "BaseBdev1", 
00:11:04.447 "uuid": "a436625a-817e-5de5-8ce7-bad400e24f7b", 00:11:04.447 "is_configured": true, 00:11:04.447 "data_offset": 2048, 00:11:04.447 "data_size": 63488 00:11:04.447 }, 00:11:04.447 { 00:11:04.447 "name": "BaseBdev2", 00:11:04.447 "uuid": "52803056-2014-51af-b6ac-1c6399e8ae69", 00:11:04.447 "is_configured": true, 00:11:04.447 "data_offset": 2048, 00:11:04.447 "data_size": 63488 00:11:04.447 }, 00:11:04.447 { 00:11:04.447 "name": "BaseBdev3", 00:11:04.447 "uuid": "9ca58053-69e2-597b-93ca-99297279dc96", 00:11:04.447 "is_configured": true, 00:11:04.447 "data_offset": 2048, 00:11:04.447 "data_size": 63488 00:11:04.447 } 00:11:04.447 ] 00:11:04.447 }' 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.447 10:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.013 10:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:05.013 10:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:05.013 [2024-11-15 10:39:35.510938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.947 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.947 "name": "raid_bdev1", 00:11:05.947 "uuid": "515bb019-f034-494d-a616-9987ad925b69", 00:11:05.947 "strip_size_kb": 64, 00:11:05.947 "state": "online", 00:11:05.947 
"raid_level": "raid0", 00:11:05.947 "superblock": true, 00:11:05.947 "num_base_bdevs": 3, 00:11:05.947 "num_base_bdevs_discovered": 3, 00:11:05.947 "num_base_bdevs_operational": 3, 00:11:05.947 "base_bdevs_list": [ 00:11:05.947 { 00:11:05.947 "name": "BaseBdev1", 00:11:05.947 "uuid": "a436625a-817e-5de5-8ce7-bad400e24f7b", 00:11:05.947 "is_configured": true, 00:11:05.947 "data_offset": 2048, 00:11:05.947 "data_size": 63488 00:11:05.947 }, 00:11:05.947 { 00:11:05.947 "name": "BaseBdev2", 00:11:05.947 "uuid": "52803056-2014-51af-b6ac-1c6399e8ae69", 00:11:05.947 "is_configured": true, 00:11:05.947 "data_offset": 2048, 00:11:05.947 "data_size": 63488 00:11:05.947 }, 00:11:05.947 { 00:11:05.947 "name": "BaseBdev3", 00:11:05.947 "uuid": "9ca58053-69e2-597b-93ca-99297279dc96", 00:11:05.947 "is_configured": true, 00:11:05.947 "data_offset": 2048, 00:11:05.947 "data_size": 63488 00:11:05.948 } 00:11:05.948 ] 00:11:05.948 }' 00:11:05.948 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.948 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.515 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.515 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.515 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.515 [2024-11-15 10:39:36.928797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.515 [2024-11-15 10:39:36.928834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.515 [2024-11-15 10:39:36.932333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.515 [2024-11-15 10:39:36.932403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.515 [2024-11-15 10:39:36.932457] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.515 [2024-11-15 10:39:36.932473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:06.515 { 00:11:06.515 "results": [ 00:11:06.515 { 00:11:06.515 "job": "raid_bdev1", 00:11:06.515 "core_mask": "0x1", 00:11:06.515 "workload": "randrw", 00:11:06.515 "percentage": 50, 00:11:06.515 "status": "finished", 00:11:06.515 "queue_depth": 1, 00:11:06.515 "io_size": 131072, 00:11:06.515 "runtime": 1.415567, 00:11:06.515 "iops": 11321.964979404012, 00:11:06.515 "mibps": 1415.2456224255016, 00:11:06.515 "io_failed": 1, 00:11:06.515 "io_timeout": 0, 00:11:06.515 "avg_latency_us": 121.19925040270434, 00:11:06.515 "min_latency_us": 26.88, 00:11:06.515 "max_latency_us": 1899.0545454545454 00:11:06.515 } 00:11:06.515 ], 00:11:06.515 "core_count": 1 00:11:06.515 } 00:11:06.515 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.515 10:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65689 00:11:06.515 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65689 ']' 00:11:06.516 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65689 00:11:06.516 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:06.516 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.516 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65689 00:11:06.516 killing process with pid 65689 00:11:06.516 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.516 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:06.516 10:39:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65689' 00:11:06.516 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65689 00:11:06.516 [2024-11-15 10:39:36.965648] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.516 10:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65689 00:11:06.774 [2024-11-15 10:39:37.160418] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SMKFP4PXmB 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:07.710 00:11:07.710 real 0m4.660s 00:11:07.710 user 0m5.881s 00:11:07.710 sys 0m0.529s 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.710 10:39:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.710 ************************************ 00:11:07.710 END TEST raid_write_error_test 00:11:07.710 ************************************ 00:11:07.710 10:39:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:07.710 10:39:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:11:07.710 10:39:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:07.710 10:39:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:07.710 10:39:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.710 ************************************ 00:11:07.710 START TEST raid_state_function_test 00:11:07.710 ************************************ 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:07.710 10:39:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:07.710 Process raid pid: 65833 00:11:07.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.710 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65833 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65833' 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65833 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 
65833 ']' 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:07.711 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.969 [2024-11-15 10:39:38.367771] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:11:07.969 [2024-11-15 10:39:38.368140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.227 [2024-11-15 10:39:38.553901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.227 [2024-11-15 10:39:38.681845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.485 [2024-11-15 10:39:38.907139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.485 [2024-11-15 10:39:38.907400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 
64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.055 [2024-11-15 10:39:39.329769] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.055 [2024-11-15 10:39:39.329986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.055 [2024-11-15 10:39:39.330119] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.055 [2024-11-15 10:39:39.330183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.055 [2024-11-15 10:39:39.330201] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.055 [2024-11-15 10:39:39.330217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.055 "name": "Existed_Raid", 00:11:09.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.055 "strip_size_kb": 64, 00:11:09.055 "state": "configuring", 00:11:09.055 "raid_level": "concat", 00:11:09.055 "superblock": false, 00:11:09.055 "num_base_bdevs": 3, 00:11:09.055 "num_base_bdevs_discovered": 0, 00:11:09.055 "num_base_bdevs_operational": 3, 00:11:09.055 "base_bdevs_list": [ 00:11:09.055 { 00:11:09.055 "name": "BaseBdev1", 00:11:09.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.055 "is_configured": false, 00:11:09.055 "data_offset": 0, 00:11:09.055 "data_size": 0 00:11:09.055 }, 00:11:09.055 { 00:11:09.055 "name": "BaseBdev2", 00:11:09.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.055 "is_configured": false, 00:11:09.055 "data_offset": 0, 00:11:09.055 "data_size": 0 00:11:09.055 }, 00:11:09.055 { 00:11:09.055 "name": "BaseBdev3", 00:11:09.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.055 "is_configured": 
false, 00:11:09.055 "data_offset": 0, 00:11:09.055 "data_size": 0 00:11:09.055 } 00:11:09.055 ] 00:11:09.055 }' 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.055 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 [2024-11-15 10:39:39.849822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.315 [2024-11-15 10:39:39.849874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 [2024-11-15 10:39:39.857814] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.315 [2024-11-15 10:39:39.857873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.315 [2024-11-15 10:39:39.857889] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.315 [2024-11-15 10:39:39.857905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.315 [2024-11-15 10:39:39.857915] 
bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.315 [2024-11-15 10:39:39.857929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.315 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.573 BaseBdev1 00:11:09.573 [2024-11-15 10:39:39.902624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.573 [ 00:11:09.573 { 00:11:09.573 "name": "BaseBdev1", 00:11:09.573 "aliases": [ 00:11:09.573 "4fcb2465-d317-485d-bc78-363ac4d33276" 00:11:09.573 ], 00:11:09.573 "product_name": "Malloc disk", 00:11:09.573 "block_size": 512, 00:11:09.573 "num_blocks": 65536, 00:11:09.573 "uuid": "4fcb2465-d317-485d-bc78-363ac4d33276", 00:11:09.573 "assigned_rate_limits": { 00:11:09.573 "rw_ios_per_sec": 0, 00:11:09.573 "rw_mbytes_per_sec": 0, 00:11:09.573 "r_mbytes_per_sec": 0, 00:11:09.573 "w_mbytes_per_sec": 0 00:11:09.573 }, 00:11:09.573 "claimed": true, 00:11:09.573 "claim_type": "exclusive_write", 00:11:09.573 "zoned": false, 00:11:09.573 "supported_io_types": { 00:11:09.573 "read": true, 00:11:09.573 "write": true, 00:11:09.573 "unmap": true, 00:11:09.573 "flush": true, 00:11:09.573 "reset": true, 00:11:09.573 "nvme_admin": false, 00:11:09.573 "nvme_io": false, 00:11:09.573 "nvme_io_md": false, 00:11:09.573 "write_zeroes": true, 00:11:09.573 "zcopy": true, 00:11:09.573 "get_zone_info": false, 00:11:09.573 "zone_management": false, 00:11:09.573 "zone_append": false, 00:11:09.573 "compare": false, 00:11:09.573 "compare_and_write": false, 00:11:09.573 "abort": true, 00:11:09.573 "seek_hole": false, 00:11:09.573 "seek_data": false, 00:11:09.573 "copy": true, 00:11:09.573 "nvme_iov_md": false 00:11:09.573 }, 00:11:09.573 "memory_domains": [ 00:11:09.573 { 00:11:09.573 "dma_device_id": "system", 00:11:09.573 "dma_device_type": 1 00:11:09.573 }, 00:11:09.573 { 00:11:09.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.573 "dma_device_type": 2 00:11:09.573 } 00:11:09.573 ], 
00:11:09.573 "driver_specific": {} 00:11:09.573 } 00:11:09.573 ] 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.573 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.574 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.574 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.574 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.574 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:09.574 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.574 "name": "Existed_Raid", 00:11:09.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.574 "strip_size_kb": 64, 00:11:09.574 "state": "configuring", 00:11:09.574 "raid_level": "concat", 00:11:09.574 "superblock": false, 00:11:09.574 "num_base_bdevs": 3, 00:11:09.574 "num_base_bdevs_discovered": 1, 00:11:09.574 "num_base_bdevs_operational": 3, 00:11:09.574 "base_bdevs_list": [ 00:11:09.574 { 00:11:09.574 "name": "BaseBdev1", 00:11:09.574 "uuid": "4fcb2465-d317-485d-bc78-363ac4d33276", 00:11:09.574 "is_configured": true, 00:11:09.574 "data_offset": 0, 00:11:09.574 "data_size": 65536 00:11:09.574 }, 00:11:09.574 { 00:11:09.574 "name": "BaseBdev2", 00:11:09.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.574 "is_configured": false, 00:11:09.574 "data_offset": 0, 00:11:09.574 "data_size": 0 00:11:09.574 }, 00:11:09.574 { 00:11:09.574 "name": "BaseBdev3", 00:11:09.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.574 "is_configured": false, 00:11:09.574 "data_offset": 0, 00:11:09.574 "data_size": 0 00:11:09.574 } 00:11:09.574 ] 00:11:09.574 }' 00:11:09.574 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.574 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.139 [2024-11-15 10:39:40.478834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.139 [2024-11-15 10:39:40.478901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
Existed_Raid, state configuring 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.139 [2024-11-15 10:39:40.486875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.139 [2024-11-15 10:39:40.489160] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.139 [2024-11-15 10:39:40.489220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.139 [2024-11-15 10:39:40.489238] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:10.139 [2024-11-15 10:39:40.489254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.139 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.139 "name": "Existed_Raid", 00:11:10.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.139 "strip_size_kb": 64, 00:11:10.139 "state": "configuring", 00:11:10.139 "raid_level": "concat", 00:11:10.139 "superblock": false, 00:11:10.139 "num_base_bdevs": 3, 00:11:10.139 "num_base_bdevs_discovered": 1, 00:11:10.139 "num_base_bdevs_operational": 3, 00:11:10.139 "base_bdevs_list": [ 00:11:10.139 { 00:11:10.140 "name": "BaseBdev1", 00:11:10.140 "uuid": "4fcb2465-d317-485d-bc78-363ac4d33276", 00:11:10.140 "is_configured": true, 00:11:10.140 "data_offset": 0, 00:11:10.140 "data_size": 65536 00:11:10.140 }, 00:11:10.140 { 
00:11:10.140 "name": "BaseBdev2", 00:11:10.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.140 "is_configured": false, 00:11:10.140 "data_offset": 0, 00:11:10.140 "data_size": 0 00:11:10.140 }, 00:11:10.140 { 00:11:10.140 "name": "BaseBdev3", 00:11:10.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.140 "is_configured": false, 00:11:10.140 "data_offset": 0, 00:11:10.140 "data_size": 0 00:11:10.140 } 00:11:10.140 ] 00:11:10.140 }' 00:11:10.140 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.140 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.706 [2024-11-15 10:39:41.073688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.706 BaseBdev2 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.706 [ 00:11:10.706 { 00:11:10.706 "name": "BaseBdev2", 00:11:10.706 "aliases": [ 00:11:10.706 "6ec4ebd7-70dd-4890-bed5-1d40a1fb8097" 00:11:10.706 ], 00:11:10.706 "product_name": "Malloc disk", 00:11:10.706 "block_size": 512, 00:11:10.706 "num_blocks": 65536, 00:11:10.706 "uuid": "6ec4ebd7-70dd-4890-bed5-1d40a1fb8097", 00:11:10.706 "assigned_rate_limits": { 00:11:10.706 "rw_ios_per_sec": 0, 00:11:10.706 "rw_mbytes_per_sec": 0, 00:11:10.706 "r_mbytes_per_sec": 0, 00:11:10.706 "w_mbytes_per_sec": 0 00:11:10.706 }, 00:11:10.706 "claimed": true, 00:11:10.706 "claim_type": "exclusive_write", 00:11:10.706 "zoned": false, 00:11:10.706 "supported_io_types": { 00:11:10.706 "read": true, 00:11:10.706 "write": true, 00:11:10.706 "unmap": true, 00:11:10.706 "flush": true, 00:11:10.706 "reset": true, 00:11:10.706 "nvme_admin": false, 00:11:10.706 "nvme_io": false, 00:11:10.706 "nvme_io_md": false, 00:11:10.706 "write_zeroes": true, 00:11:10.706 "zcopy": true, 00:11:10.706 "get_zone_info": false, 00:11:10.706 "zone_management": false, 00:11:10.706 "zone_append": false, 00:11:10.706 "compare": false, 00:11:10.706 "compare_and_write": false, 00:11:10.706 "abort": true, 00:11:10.706 "seek_hole": false, 00:11:10.706 "seek_data": false, 00:11:10.706 
"copy": true, 00:11:10.706 "nvme_iov_md": false 00:11:10.706 }, 00:11:10.706 "memory_domains": [ 00:11:10.706 { 00:11:10.706 "dma_device_id": "system", 00:11:10.706 "dma_device_type": 1 00:11:10.706 }, 00:11:10.706 { 00:11:10.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.706 "dma_device_type": 2 00:11:10.706 } 00:11:10.706 ], 00:11:10.706 "driver_specific": {} 00:11:10.706 } 00:11:10.706 ] 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.706 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.706 
10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.707 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.707 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.707 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.707 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.707 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.707 "name": "Existed_Raid", 00:11:10.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.707 "strip_size_kb": 64, 00:11:10.707 "state": "configuring", 00:11:10.707 "raid_level": "concat", 00:11:10.707 "superblock": false, 00:11:10.707 "num_base_bdevs": 3, 00:11:10.707 "num_base_bdevs_discovered": 2, 00:11:10.707 "num_base_bdevs_operational": 3, 00:11:10.707 "base_bdevs_list": [ 00:11:10.707 { 00:11:10.707 "name": "BaseBdev1", 00:11:10.707 "uuid": "4fcb2465-d317-485d-bc78-363ac4d33276", 00:11:10.707 "is_configured": true, 00:11:10.707 "data_offset": 0, 00:11:10.707 "data_size": 65536 00:11:10.707 }, 00:11:10.707 { 00:11:10.707 "name": "BaseBdev2", 00:11:10.707 "uuid": "6ec4ebd7-70dd-4890-bed5-1d40a1fb8097", 00:11:10.707 "is_configured": true, 00:11:10.707 "data_offset": 0, 00:11:10.707 "data_size": 65536 00:11:10.707 }, 00:11:10.707 { 00:11:10.707 "name": "BaseBdev3", 00:11:10.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.707 "is_configured": false, 00:11:10.707 "data_offset": 0, 00:11:10.707 "data_size": 0 00:11:10.707 } 00:11:10.707 ] 00:11:10.707 }' 00:11:10.707 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.707 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.274 10:39:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.274 [2024-11-15 10:39:41.648067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.274 [2024-11-15 10:39:41.648130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:11.274 [2024-11-15 10:39:41.648149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:11.274 [2024-11-15 10:39:41.648502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:11.274 [2024-11-15 10:39:41.648742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:11.274 [2024-11-15 10:39:41.648778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:11.274 [2024-11-15 10:39:41.649089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.274 BaseBdev3 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.274 [ 00:11:11.274 { 00:11:11.274 "name": "BaseBdev3", 00:11:11.274 "aliases": [ 00:11:11.274 "2a9cdad8-a8df-4c8f-bfe9-c1a423644680" 00:11:11.274 ], 00:11:11.274 "product_name": "Malloc disk", 00:11:11.274 "block_size": 512, 00:11:11.274 "num_blocks": 65536, 00:11:11.274 "uuid": "2a9cdad8-a8df-4c8f-bfe9-c1a423644680", 00:11:11.274 "assigned_rate_limits": { 00:11:11.274 "rw_ios_per_sec": 0, 00:11:11.274 "rw_mbytes_per_sec": 0, 00:11:11.274 "r_mbytes_per_sec": 0, 00:11:11.274 "w_mbytes_per_sec": 0 00:11:11.274 }, 00:11:11.274 "claimed": true, 00:11:11.274 "claim_type": "exclusive_write", 00:11:11.274 "zoned": false, 00:11:11.274 "supported_io_types": { 00:11:11.274 "read": true, 00:11:11.274 "write": true, 00:11:11.274 "unmap": true, 00:11:11.274 "flush": true, 00:11:11.274 "reset": true, 00:11:11.274 "nvme_admin": false, 00:11:11.274 "nvme_io": false, 00:11:11.274 "nvme_io_md": false, 00:11:11.274 "write_zeroes": true, 00:11:11.274 "zcopy": true, 00:11:11.274 "get_zone_info": false, 00:11:11.274 "zone_management": false, 00:11:11.274 "zone_append": false, 00:11:11.274 "compare": false, 00:11:11.274 "compare_and_write": false, 
00:11:11.274 "abort": true, 00:11:11.274 "seek_hole": false, 00:11:11.274 "seek_data": false, 00:11:11.274 "copy": true, 00:11:11.274 "nvme_iov_md": false 00:11:11.274 }, 00:11:11.274 "memory_domains": [ 00:11:11.274 { 00:11:11.274 "dma_device_id": "system", 00:11:11.274 "dma_device_type": 1 00:11:11.274 }, 00:11:11.274 { 00:11:11.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.274 "dma_device_type": 2 00:11:11.274 } 00:11:11.274 ], 00:11:11.274 "driver_specific": {} 00:11:11.274 } 00:11:11.274 ] 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.274 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.275 
10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.275 "name": "Existed_Raid", 00:11:11.275 "uuid": "65c46c8d-27f9-4dc8-a2d3-c946e194a21c", 00:11:11.275 "strip_size_kb": 64, 00:11:11.275 "state": "online", 00:11:11.275 "raid_level": "concat", 00:11:11.275 "superblock": false, 00:11:11.275 "num_base_bdevs": 3, 00:11:11.275 "num_base_bdevs_discovered": 3, 00:11:11.275 "num_base_bdevs_operational": 3, 00:11:11.275 "base_bdevs_list": [ 00:11:11.275 { 00:11:11.275 "name": "BaseBdev1", 00:11:11.275 "uuid": "4fcb2465-d317-485d-bc78-363ac4d33276", 00:11:11.275 "is_configured": true, 00:11:11.275 "data_offset": 0, 00:11:11.275 "data_size": 65536 00:11:11.275 }, 00:11:11.275 { 00:11:11.275 "name": "BaseBdev2", 00:11:11.275 "uuid": "6ec4ebd7-70dd-4890-bed5-1d40a1fb8097", 00:11:11.275 "is_configured": true, 00:11:11.275 "data_offset": 0, 00:11:11.275 "data_size": 65536 00:11:11.275 }, 00:11:11.275 { 00:11:11.275 "name": "BaseBdev3", 00:11:11.275 "uuid": "2a9cdad8-a8df-4c8f-bfe9-c1a423644680", 00:11:11.275 "is_configured": true, 00:11:11.275 "data_offset": 0, 00:11:11.275 "data_size": 65536 00:11:11.275 } 00:11:11.275 ] 00:11:11.275 }' 00:11:11.275 10:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.275 10:39:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.842 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.842 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.842 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.843 [2024-11-15 10:39:42.200656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.843 "name": "Existed_Raid", 00:11:11.843 "aliases": [ 00:11:11.843 "65c46c8d-27f9-4dc8-a2d3-c946e194a21c" 00:11:11.843 ], 00:11:11.843 "product_name": "Raid Volume", 00:11:11.843 "block_size": 512, 00:11:11.843 "num_blocks": 196608, 00:11:11.843 "uuid": "65c46c8d-27f9-4dc8-a2d3-c946e194a21c", 00:11:11.843 "assigned_rate_limits": { 00:11:11.843 "rw_ios_per_sec": 0, 00:11:11.843 "rw_mbytes_per_sec": 0, 00:11:11.843 "r_mbytes_per_sec": 0, 00:11:11.843 
"w_mbytes_per_sec": 0 00:11:11.843 }, 00:11:11.843 "claimed": false, 00:11:11.843 "zoned": false, 00:11:11.843 "supported_io_types": { 00:11:11.843 "read": true, 00:11:11.843 "write": true, 00:11:11.843 "unmap": true, 00:11:11.843 "flush": true, 00:11:11.843 "reset": true, 00:11:11.843 "nvme_admin": false, 00:11:11.843 "nvme_io": false, 00:11:11.843 "nvme_io_md": false, 00:11:11.843 "write_zeroes": true, 00:11:11.843 "zcopy": false, 00:11:11.843 "get_zone_info": false, 00:11:11.843 "zone_management": false, 00:11:11.843 "zone_append": false, 00:11:11.843 "compare": false, 00:11:11.843 "compare_and_write": false, 00:11:11.843 "abort": false, 00:11:11.843 "seek_hole": false, 00:11:11.843 "seek_data": false, 00:11:11.843 "copy": false, 00:11:11.843 "nvme_iov_md": false 00:11:11.843 }, 00:11:11.843 "memory_domains": [ 00:11:11.843 { 00:11:11.843 "dma_device_id": "system", 00:11:11.843 "dma_device_type": 1 00:11:11.843 }, 00:11:11.843 { 00:11:11.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.843 "dma_device_type": 2 00:11:11.843 }, 00:11:11.843 { 00:11:11.843 "dma_device_id": "system", 00:11:11.843 "dma_device_type": 1 00:11:11.843 }, 00:11:11.843 { 00:11:11.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.843 "dma_device_type": 2 00:11:11.843 }, 00:11:11.843 { 00:11:11.843 "dma_device_id": "system", 00:11:11.843 "dma_device_type": 1 00:11:11.843 }, 00:11:11.843 { 00:11:11.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.843 "dma_device_type": 2 00:11:11.843 } 00:11:11.843 ], 00:11:11.843 "driver_specific": { 00:11:11.843 "raid": { 00:11:11.843 "uuid": "65c46c8d-27f9-4dc8-a2d3-c946e194a21c", 00:11:11.843 "strip_size_kb": 64, 00:11:11.843 "state": "online", 00:11:11.843 "raid_level": "concat", 00:11:11.843 "superblock": false, 00:11:11.843 "num_base_bdevs": 3, 00:11:11.843 "num_base_bdevs_discovered": 3, 00:11:11.843 "num_base_bdevs_operational": 3, 00:11:11.843 "base_bdevs_list": [ 00:11:11.843 { 00:11:11.843 "name": "BaseBdev1", 00:11:11.843 "uuid": 
"4fcb2465-d317-485d-bc78-363ac4d33276", 00:11:11.843 "is_configured": true, 00:11:11.843 "data_offset": 0, 00:11:11.843 "data_size": 65536 00:11:11.843 }, 00:11:11.843 { 00:11:11.843 "name": "BaseBdev2", 00:11:11.843 "uuid": "6ec4ebd7-70dd-4890-bed5-1d40a1fb8097", 00:11:11.843 "is_configured": true, 00:11:11.843 "data_offset": 0, 00:11:11.843 "data_size": 65536 00:11:11.843 }, 00:11:11.843 { 00:11:11.843 "name": "BaseBdev3", 00:11:11.843 "uuid": "2a9cdad8-a8df-4c8f-bfe9-c1a423644680", 00:11:11.843 "is_configured": true, 00:11:11.843 "data_offset": 0, 00:11:11.843 "data_size": 65536 00:11:11.843 } 00:11:11.843 ] 00:11:11.843 } 00:11:11.843 } 00:11:11.843 }' 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:11.843 BaseBdev2 00:11:11.843 BaseBdev3' 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.843 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.102 
10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.102 
10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.102 [2024-11-15 10:39:42.528421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.102 [2024-11-15 10:39:42.528460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.102 [2024-11-15 10:39:42.528531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.102 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.361 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.361 "name": "Existed_Raid", 00:11:12.361 "uuid": "65c46c8d-27f9-4dc8-a2d3-c946e194a21c", 00:11:12.361 "strip_size_kb": 64, 00:11:12.361 "state": "offline", 00:11:12.361 "raid_level": "concat", 00:11:12.361 "superblock": false, 00:11:12.361 "num_base_bdevs": 3, 00:11:12.361 "num_base_bdevs_discovered": 2, 00:11:12.361 "num_base_bdevs_operational": 2, 00:11:12.361 "base_bdevs_list": [ 00:11:12.361 { 00:11:12.361 "name": null, 00:11:12.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.361 "is_configured": false, 00:11:12.361 "data_offset": 0, 00:11:12.361 "data_size": 65536 00:11:12.361 }, 00:11:12.361 { 00:11:12.361 "name": "BaseBdev2", 00:11:12.361 "uuid": "6ec4ebd7-70dd-4890-bed5-1d40a1fb8097", 00:11:12.361 
"is_configured": true, 00:11:12.361 "data_offset": 0, 00:11:12.361 "data_size": 65536 00:11:12.361 }, 00:11:12.361 { 00:11:12.361 "name": "BaseBdev3", 00:11:12.361 "uuid": "2a9cdad8-a8df-4c8f-bfe9-c1a423644680", 00:11:12.361 "is_configured": true, 00:11:12.361 "data_offset": 0, 00:11:12.361 "data_size": 65536 00:11:12.361 } 00:11:12.361 ] 00:11:12.361 }' 00:11:12.361 10:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.361 10:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.620 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.620 [2024-11-15 10:39:43.148539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:12.879 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.880 [2024-11-15 10:39:43.296978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:12.880 [2024-11-15 10:39:43.297052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.880 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.139 BaseBdev2 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 
-- # local i 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.139 [ 00:11:13.139 { 00:11:13.139 "name": "BaseBdev2", 00:11:13.139 "aliases": [ 00:11:13.139 "6fbcd050-27b2-4c4b-8356-1e27b671f373" 00:11:13.139 ], 00:11:13.139 "product_name": "Malloc disk", 00:11:13.139 "block_size": 512, 00:11:13.139 "num_blocks": 65536, 00:11:13.139 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:13.139 "assigned_rate_limits": { 00:11:13.139 "rw_ios_per_sec": 0, 00:11:13.139 "rw_mbytes_per_sec": 0, 00:11:13.139 "r_mbytes_per_sec": 0, 00:11:13.139 "w_mbytes_per_sec": 0 00:11:13.139 }, 00:11:13.139 "claimed": false, 00:11:13.139 "zoned": false, 00:11:13.139 "supported_io_types": { 00:11:13.139 "read": true, 00:11:13.139 "write": true, 00:11:13.139 "unmap": true, 00:11:13.139 "flush": true, 00:11:13.139 "reset": true, 00:11:13.139 "nvme_admin": false, 00:11:13.139 "nvme_io": false, 00:11:13.139 "nvme_io_md": false, 00:11:13.139 "write_zeroes": true, 00:11:13.139 "zcopy": true, 00:11:13.139 "get_zone_info": false, 
00:11:13.139 "zone_management": false, 00:11:13.139 "zone_append": false, 00:11:13.139 "compare": false, 00:11:13.139 "compare_and_write": false, 00:11:13.139 "abort": true, 00:11:13.139 "seek_hole": false, 00:11:13.139 "seek_data": false, 00:11:13.139 "copy": true, 00:11:13.139 "nvme_iov_md": false 00:11:13.139 }, 00:11:13.139 "memory_domains": [ 00:11:13.139 { 00:11:13.139 "dma_device_id": "system", 00:11:13.139 "dma_device_type": 1 00:11:13.139 }, 00:11:13.139 { 00:11:13.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.139 "dma_device_type": 2 00:11:13.139 } 00:11:13.139 ], 00:11:13.139 "driver_specific": {} 00:11:13.139 } 00:11:13.139 ] 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.139 BaseBdev3 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:13.139 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 
00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.140 [ 00:11:13.140 { 00:11:13.140 "name": "BaseBdev3", 00:11:13.140 "aliases": [ 00:11:13.140 "50e1b4ae-9b4e-4c46-b051-3ab3c8402012" 00:11:13.140 ], 00:11:13.140 "product_name": "Malloc disk", 00:11:13.140 "block_size": 512, 00:11:13.140 "num_blocks": 65536, 00:11:13.140 "uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:13.140 "assigned_rate_limits": { 00:11:13.140 "rw_ios_per_sec": 0, 00:11:13.140 "rw_mbytes_per_sec": 0, 00:11:13.140 "r_mbytes_per_sec": 0, 00:11:13.140 "w_mbytes_per_sec": 0 00:11:13.140 }, 00:11:13.140 "claimed": false, 00:11:13.140 "zoned": false, 00:11:13.140 "supported_io_types": { 00:11:13.140 "read": true, 00:11:13.140 "write": true, 00:11:13.140 "unmap": true, 00:11:13.140 "flush": true, 00:11:13.140 "reset": true, 00:11:13.140 "nvme_admin": false, 00:11:13.140 "nvme_io": false, 00:11:13.140 "nvme_io_md": false, 00:11:13.140 "write_zeroes": true, 00:11:13.140 "zcopy": true, 00:11:13.140 "get_zone_info": false, 00:11:13.140 
"zone_management": false, 00:11:13.140 "zone_append": false, 00:11:13.140 "compare": false, 00:11:13.140 "compare_and_write": false, 00:11:13.140 "abort": true, 00:11:13.140 "seek_hole": false, 00:11:13.140 "seek_data": false, 00:11:13.140 "copy": true, 00:11:13.140 "nvme_iov_md": false 00:11:13.140 }, 00:11:13.140 "memory_domains": [ 00:11:13.140 { 00:11:13.140 "dma_device_id": "system", 00:11:13.140 "dma_device_type": 1 00:11:13.140 }, 00:11:13.140 { 00:11:13.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.140 "dma_device_type": 2 00:11:13.140 } 00:11:13.140 ], 00:11:13.140 "driver_specific": {} 00:11:13.140 } 00:11:13.140 ] 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.140 [2024-11-15 10:39:43.582218] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.140 [2024-11-15 10:39:43.582277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.140 [2024-11-15 10:39:43.582309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.140 [2024-11-15 10:39:43.584584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.140 10:39:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.140 "name": "Existed_Raid", 00:11:13.140 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:13.140 "strip_size_kb": 64, 00:11:13.140 "state": "configuring", 00:11:13.140 "raid_level": "concat", 00:11:13.140 "superblock": false, 00:11:13.140 "num_base_bdevs": 3, 00:11:13.140 "num_base_bdevs_discovered": 2, 00:11:13.140 "num_base_bdevs_operational": 3, 00:11:13.140 "base_bdevs_list": [ 00:11:13.140 { 00:11:13.140 "name": "BaseBdev1", 00:11:13.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.140 "is_configured": false, 00:11:13.140 "data_offset": 0, 00:11:13.140 "data_size": 0 00:11:13.140 }, 00:11:13.140 { 00:11:13.140 "name": "BaseBdev2", 00:11:13.140 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:13.140 "is_configured": true, 00:11:13.140 "data_offset": 0, 00:11:13.140 "data_size": 65536 00:11:13.140 }, 00:11:13.140 { 00:11:13.140 "name": "BaseBdev3", 00:11:13.140 "uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:13.140 "is_configured": true, 00:11:13.140 "data_offset": 0, 00:11:13.140 "data_size": 65536 00:11:13.140 } 00:11:13.140 ] 00:11:13.140 }' 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.140 10:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.707 [2024-11-15 10:39:44.078384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:13.707 10:39:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.707 "name": "Existed_Raid", 00:11:13.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.707 "strip_size_kb": 64, 00:11:13.707 "state": "configuring", 00:11:13.707 "raid_level": "concat", 00:11:13.707 "superblock": false, 00:11:13.707 "num_base_bdevs": 3, 00:11:13.707 "num_base_bdevs_discovered": 1, 00:11:13.707 
"num_base_bdevs_operational": 3, 00:11:13.707 "base_bdevs_list": [ 00:11:13.707 { 00:11:13.707 "name": "BaseBdev1", 00:11:13.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.707 "is_configured": false, 00:11:13.707 "data_offset": 0, 00:11:13.707 "data_size": 0 00:11:13.707 }, 00:11:13.707 { 00:11:13.707 "name": null, 00:11:13.707 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:13.707 "is_configured": false, 00:11:13.707 "data_offset": 0, 00:11:13.707 "data_size": 65536 00:11:13.707 }, 00:11:13.707 { 00:11:13.707 "name": "BaseBdev3", 00:11:13.707 "uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:13.707 "is_configured": true, 00:11:13.707 "data_offset": 0, 00:11:13.707 "data_size": 65536 00:11:13.707 } 00:11:13.707 ] 00:11:13.707 }' 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.707 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.286 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:14.287 [2024-11-15 10:39:44.672255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.287 BaseBdev1 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.287 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.287 [ 00:11:14.287 { 00:11:14.287 "name": "BaseBdev1", 00:11:14.287 "aliases": [ 00:11:14.287 "9342a230-d3be-43c0-9188-b619366351a4" 00:11:14.287 ], 00:11:14.287 "product_name": "Malloc disk", 00:11:14.287 "block_size": 512, 00:11:14.287 "num_blocks": 65536, 00:11:14.287 
"uuid": "9342a230-d3be-43c0-9188-b619366351a4", 00:11:14.287 "assigned_rate_limits": { 00:11:14.287 "rw_ios_per_sec": 0, 00:11:14.287 "rw_mbytes_per_sec": 0, 00:11:14.287 "r_mbytes_per_sec": 0, 00:11:14.287 "w_mbytes_per_sec": 0 00:11:14.287 }, 00:11:14.287 "claimed": true, 00:11:14.287 "claim_type": "exclusive_write", 00:11:14.287 "zoned": false, 00:11:14.287 "supported_io_types": { 00:11:14.287 "read": true, 00:11:14.287 "write": true, 00:11:14.288 "unmap": true, 00:11:14.288 "flush": true, 00:11:14.288 "reset": true, 00:11:14.288 "nvme_admin": false, 00:11:14.288 "nvme_io": false, 00:11:14.288 "nvme_io_md": false, 00:11:14.288 "write_zeroes": true, 00:11:14.288 "zcopy": true, 00:11:14.288 "get_zone_info": false, 00:11:14.288 "zone_management": false, 00:11:14.288 "zone_append": false, 00:11:14.288 "compare": false, 00:11:14.288 "compare_and_write": false, 00:11:14.288 "abort": true, 00:11:14.288 "seek_hole": false, 00:11:14.288 "seek_data": false, 00:11:14.288 "copy": true, 00:11:14.288 "nvme_iov_md": false 00:11:14.288 }, 00:11:14.288 "memory_domains": [ 00:11:14.288 { 00:11:14.288 "dma_device_id": "system", 00:11:14.288 "dma_device_type": 1 00:11:14.288 }, 00:11:14.288 { 00:11:14.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.288 "dma_device_type": 2 00:11:14.288 } 00:11:14.288 ], 00:11:14.288 "driver_specific": {} 00:11:14.288 } 00:11:14.288 ] 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.288 
10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.288 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.289 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.289 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.289 "name": "Existed_Raid", 00:11:14.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.289 "strip_size_kb": 64, 00:11:14.289 "state": "configuring", 00:11:14.289 "raid_level": "concat", 00:11:14.289 "superblock": false, 00:11:14.289 "num_base_bdevs": 3, 00:11:14.289 "num_base_bdevs_discovered": 2, 00:11:14.289 "num_base_bdevs_operational": 3, 00:11:14.289 "base_bdevs_list": [ 00:11:14.289 { 00:11:14.289 "name": "BaseBdev1", 00:11:14.289 "uuid": "9342a230-d3be-43c0-9188-b619366351a4", 00:11:14.289 "is_configured": true, 00:11:14.289 
"data_offset": 0, 00:11:14.289 "data_size": 65536 00:11:14.289 }, 00:11:14.289 { 00:11:14.289 "name": null, 00:11:14.289 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:14.289 "is_configured": false, 00:11:14.289 "data_offset": 0, 00:11:14.289 "data_size": 65536 00:11:14.289 }, 00:11:14.289 { 00:11:14.289 "name": "BaseBdev3", 00:11:14.289 "uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:14.289 "is_configured": true, 00:11:14.289 "data_offset": 0, 00:11:14.289 "data_size": 65536 00:11:14.289 } 00:11:14.289 ] 00:11:14.289 }' 00:11:14.289 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.289 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.863 [2024-11-15 10:39:45.276498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.863 
10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.863 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.863 "name": "Existed_Raid", 00:11:14.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.863 "strip_size_kb": 64, 00:11:14.863 "state": "configuring", 
00:11:14.863 "raid_level": "concat", 00:11:14.863 "superblock": false, 00:11:14.863 "num_base_bdevs": 3, 00:11:14.863 "num_base_bdevs_discovered": 1, 00:11:14.863 "num_base_bdevs_operational": 3, 00:11:14.863 "base_bdevs_list": [ 00:11:14.863 { 00:11:14.863 "name": "BaseBdev1", 00:11:14.863 "uuid": "9342a230-d3be-43c0-9188-b619366351a4", 00:11:14.863 "is_configured": true, 00:11:14.863 "data_offset": 0, 00:11:14.863 "data_size": 65536 00:11:14.863 }, 00:11:14.863 { 00:11:14.863 "name": null, 00:11:14.863 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:14.863 "is_configured": false, 00:11:14.863 "data_offset": 0, 00:11:14.863 "data_size": 65536 00:11:14.863 }, 00:11:14.863 { 00:11:14.864 "name": null, 00:11:14.864 "uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:14.864 "is_configured": false, 00:11:14.864 "data_offset": 0, 00:11:14.864 "data_size": 65536 00:11:14.864 } 00:11:14.864 ] 00:11:14.864 }' 00:11:14.864 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.864 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:15.447 10:39:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.447 [2024-11-15 10:39:45.832695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.447 10:39:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.447 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.447 "name": "Existed_Raid", 00:11:15.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.447 "strip_size_kb": 64, 00:11:15.447 "state": "configuring", 00:11:15.447 "raid_level": "concat", 00:11:15.447 "superblock": false, 00:11:15.447 "num_base_bdevs": 3, 00:11:15.447 "num_base_bdevs_discovered": 2, 00:11:15.447 "num_base_bdevs_operational": 3, 00:11:15.447 "base_bdevs_list": [ 00:11:15.447 { 00:11:15.447 "name": "BaseBdev1", 00:11:15.447 "uuid": "9342a230-d3be-43c0-9188-b619366351a4", 00:11:15.447 "is_configured": true, 00:11:15.447 "data_offset": 0, 00:11:15.447 "data_size": 65536 00:11:15.447 }, 00:11:15.447 { 00:11:15.447 "name": null, 00:11:15.447 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:15.447 "is_configured": false, 00:11:15.447 "data_offset": 0, 00:11:15.447 "data_size": 65536 00:11:15.447 }, 00:11:15.447 { 00:11:15.447 "name": "BaseBdev3", 00:11:15.447 "uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:15.447 "is_configured": true, 00:11:15.447 "data_offset": 0, 00:11:15.447 "data_size": 65536 00:11:15.447 } 00:11:15.447 ] 00:11:15.447 }' 00:11:15.448 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.448 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.014 10:39:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.014 [2024-11-15 10:39:46.376850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.014 "name": "Existed_Raid", 00:11:16.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.014 "strip_size_kb": 64, 00:11:16.014 "state": "configuring", 00:11:16.014 "raid_level": "concat", 00:11:16.014 "superblock": false, 00:11:16.014 "num_base_bdevs": 3, 00:11:16.014 "num_base_bdevs_discovered": 1, 00:11:16.014 "num_base_bdevs_operational": 3, 00:11:16.014 "base_bdevs_list": [ 00:11:16.014 { 00:11:16.014 "name": null, 00:11:16.014 "uuid": "9342a230-d3be-43c0-9188-b619366351a4", 00:11:16.014 "is_configured": false, 00:11:16.014 "data_offset": 0, 00:11:16.014 "data_size": 65536 00:11:16.014 }, 00:11:16.014 { 00:11:16.014 "name": null, 00:11:16.014 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:16.014 "is_configured": false, 00:11:16.014 "data_offset": 0, 00:11:16.014 "data_size": 65536 00:11:16.014 }, 00:11:16.014 { 00:11:16.014 "name": "BaseBdev3", 00:11:16.014 "uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:16.014 "is_configured": true, 00:11:16.014 "data_offset": 0, 00:11:16.014 "data_size": 65536 00:11:16.014 } 00:11:16.014 ] 00:11:16.014 }' 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.014 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.583 
10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.583 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.583 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.583 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.583 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.583 [2024-11-15 10:39:47.028900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.583 
10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.583 "name": "Existed_Raid", 00:11:16.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.583 "strip_size_kb": 64, 00:11:16.583 "state": "configuring", 00:11:16.583 "raid_level": "concat", 00:11:16.583 "superblock": false, 00:11:16.583 "num_base_bdevs": 3, 00:11:16.583 "num_base_bdevs_discovered": 2, 00:11:16.583 "num_base_bdevs_operational": 3, 00:11:16.583 "base_bdevs_list": [ 00:11:16.583 { 00:11:16.583 "name": null, 00:11:16.583 "uuid": "9342a230-d3be-43c0-9188-b619366351a4", 00:11:16.583 "is_configured": false, 00:11:16.583 "data_offset": 0, 00:11:16.583 "data_size": 65536 00:11:16.583 }, 00:11:16.583 { 00:11:16.583 "name": "BaseBdev2", 00:11:16.583 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:16.583 "is_configured": true, 00:11:16.583 "data_offset": 0, 00:11:16.583 "data_size": 65536 00:11:16.583 }, 00:11:16.583 { 00:11:16.583 "name": "BaseBdev3", 00:11:16.583 
"uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:16.583 "is_configured": true, 00:11:16.583 "data_offset": 0, 00:11:16.583 "data_size": 65536 00:11:16.583 } 00:11:16.583 ] 00:11:16.583 }' 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.583 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.150 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.150 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.150 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.150 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.150 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.151 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:17.151 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.151 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.151 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.151 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:17.151 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.151 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9342a230-d3be-43c0-9188-b619366351a4 00:11:17.151 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.151 10:39:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.409 [2024-11-15 10:39:47.727327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:17.409 [2024-11-15 10:39:47.727407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:17.409 [2024-11-15 10:39:47.727425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:17.409 [2024-11-15 10:39:47.727738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:17.409 [2024-11-15 10:39:47.727929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:17.409 [2024-11-15 10:39:47.727954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:17.409 [2024-11-15 10:39:47.728252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.409 NewBaseBdev 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.409 
10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.409 [ 00:11:17.409 { 00:11:17.409 "name": "NewBaseBdev", 00:11:17.409 "aliases": [ 00:11:17.409 "9342a230-d3be-43c0-9188-b619366351a4" 00:11:17.409 ], 00:11:17.409 "product_name": "Malloc disk", 00:11:17.409 "block_size": 512, 00:11:17.409 "num_blocks": 65536, 00:11:17.409 "uuid": "9342a230-d3be-43c0-9188-b619366351a4", 00:11:17.409 "assigned_rate_limits": { 00:11:17.409 "rw_ios_per_sec": 0, 00:11:17.409 "rw_mbytes_per_sec": 0, 00:11:17.409 "r_mbytes_per_sec": 0, 00:11:17.409 "w_mbytes_per_sec": 0 00:11:17.409 }, 00:11:17.409 "claimed": true, 00:11:17.409 "claim_type": "exclusive_write", 00:11:17.409 "zoned": false, 00:11:17.409 "supported_io_types": { 00:11:17.409 "read": true, 00:11:17.409 "write": true, 00:11:17.409 "unmap": true, 00:11:17.409 "flush": true, 00:11:17.409 "reset": true, 00:11:17.409 "nvme_admin": false, 00:11:17.409 "nvme_io": false, 00:11:17.409 "nvme_io_md": false, 00:11:17.409 "write_zeroes": true, 00:11:17.409 "zcopy": true, 00:11:17.409 "get_zone_info": false, 00:11:17.409 "zone_management": false, 00:11:17.409 "zone_append": false, 00:11:17.409 "compare": false, 00:11:17.409 "compare_and_write": false, 00:11:17.409 "abort": true, 00:11:17.409 "seek_hole": false, 00:11:17.409 "seek_data": false, 00:11:17.409 "copy": true, 00:11:17.409 "nvme_iov_md": false 00:11:17.409 }, 00:11:17.409 "memory_domains": [ 00:11:17.409 { 00:11:17.409 "dma_device_id": "system", 00:11:17.409 "dma_device_type": 1 
00:11:17.409 }, 00:11:17.409 { 00:11:17.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.409 "dma_device_type": 2 00:11:17.409 } 00:11:17.409 ], 00:11:17.409 "driver_specific": {} 00:11:17.409 } 00:11:17.409 ] 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.409 "name": "Existed_Raid", 00:11:17.409 "uuid": "6666c65c-7c3f-49ac-9cd9-b816a901a27e", 00:11:17.409 "strip_size_kb": 64, 00:11:17.409 "state": "online", 00:11:17.409 "raid_level": "concat", 00:11:17.409 "superblock": false, 00:11:17.409 "num_base_bdevs": 3, 00:11:17.409 "num_base_bdevs_discovered": 3, 00:11:17.409 "num_base_bdevs_operational": 3, 00:11:17.409 "base_bdevs_list": [ 00:11:17.409 { 00:11:17.409 "name": "NewBaseBdev", 00:11:17.409 "uuid": "9342a230-d3be-43c0-9188-b619366351a4", 00:11:17.409 "is_configured": true, 00:11:17.409 "data_offset": 0, 00:11:17.409 "data_size": 65536 00:11:17.409 }, 00:11:17.409 { 00:11:17.409 "name": "BaseBdev2", 00:11:17.409 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:17.409 "is_configured": true, 00:11:17.409 "data_offset": 0, 00:11:17.409 "data_size": 65536 00:11:17.409 }, 00:11:17.409 { 00:11:17.409 "name": "BaseBdev3", 00:11:17.409 "uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:17.409 "is_configured": true, 00:11:17.409 "data_offset": 0, 00:11:17.409 "data_size": 65536 00:11:17.409 } 00:11:17.409 ] 00:11:17.409 }' 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.409 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 
-- # local base_bdev_names 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.975 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.975 [2024-11-15 10:39:48.255955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.976 "name": "Existed_Raid", 00:11:17.976 "aliases": [ 00:11:17.976 "6666c65c-7c3f-49ac-9cd9-b816a901a27e" 00:11:17.976 ], 00:11:17.976 "product_name": "Raid Volume", 00:11:17.976 "block_size": 512, 00:11:17.976 "num_blocks": 196608, 00:11:17.976 "uuid": "6666c65c-7c3f-49ac-9cd9-b816a901a27e", 00:11:17.976 "assigned_rate_limits": { 00:11:17.976 "rw_ios_per_sec": 0, 00:11:17.976 "rw_mbytes_per_sec": 0, 00:11:17.976 "r_mbytes_per_sec": 0, 00:11:17.976 "w_mbytes_per_sec": 0 00:11:17.976 }, 00:11:17.976 "claimed": false, 00:11:17.976 "zoned": false, 00:11:17.976 "supported_io_types": { 00:11:17.976 "read": true, 00:11:17.976 "write": true, 00:11:17.976 "unmap": true, 00:11:17.976 "flush": true, 00:11:17.976 "reset": true, 00:11:17.976 "nvme_admin": false, 00:11:17.976 "nvme_io": false, 00:11:17.976 "nvme_io_md": false, 00:11:17.976 "write_zeroes": true, 00:11:17.976 "zcopy": false, 00:11:17.976 "get_zone_info": false, 00:11:17.976 "zone_management": false, 
00:11:17.976 "zone_append": false, 00:11:17.976 "compare": false, 00:11:17.976 "compare_and_write": false, 00:11:17.976 "abort": false, 00:11:17.976 "seek_hole": false, 00:11:17.976 "seek_data": false, 00:11:17.976 "copy": false, 00:11:17.976 "nvme_iov_md": false 00:11:17.976 }, 00:11:17.976 "memory_domains": [ 00:11:17.976 { 00:11:17.976 "dma_device_id": "system", 00:11:17.976 "dma_device_type": 1 00:11:17.976 }, 00:11:17.976 { 00:11:17.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.976 "dma_device_type": 2 00:11:17.976 }, 00:11:17.976 { 00:11:17.976 "dma_device_id": "system", 00:11:17.976 "dma_device_type": 1 00:11:17.976 }, 00:11:17.976 { 00:11:17.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.976 "dma_device_type": 2 00:11:17.976 }, 00:11:17.976 { 00:11:17.976 "dma_device_id": "system", 00:11:17.976 "dma_device_type": 1 00:11:17.976 }, 00:11:17.976 { 00:11:17.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.976 "dma_device_type": 2 00:11:17.976 } 00:11:17.976 ], 00:11:17.976 "driver_specific": { 00:11:17.976 "raid": { 00:11:17.976 "uuid": "6666c65c-7c3f-49ac-9cd9-b816a901a27e", 00:11:17.976 "strip_size_kb": 64, 00:11:17.976 "state": "online", 00:11:17.976 "raid_level": "concat", 00:11:17.976 "superblock": false, 00:11:17.976 "num_base_bdevs": 3, 00:11:17.976 "num_base_bdevs_discovered": 3, 00:11:17.976 "num_base_bdevs_operational": 3, 00:11:17.976 "base_bdevs_list": [ 00:11:17.976 { 00:11:17.976 "name": "NewBaseBdev", 00:11:17.976 "uuid": "9342a230-d3be-43c0-9188-b619366351a4", 00:11:17.976 "is_configured": true, 00:11:17.976 "data_offset": 0, 00:11:17.976 "data_size": 65536 00:11:17.976 }, 00:11:17.976 { 00:11:17.976 "name": "BaseBdev2", 00:11:17.976 "uuid": "6fbcd050-27b2-4c4b-8356-1e27b671f373", 00:11:17.976 "is_configured": true, 00:11:17.976 "data_offset": 0, 00:11:17.976 "data_size": 65536 00:11:17.976 }, 00:11:17.976 { 00:11:17.976 "name": "BaseBdev3", 00:11:17.976 "uuid": "50e1b4ae-9b4e-4c46-b051-3ab3c8402012", 00:11:17.976 
"is_configured": true, 00:11:17.976 "data_offset": 0, 00:11:17.976 "data_size": 65536 00:11:17.976 } 00:11:17.976 ] 00:11:17.976 } 00:11:17.976 } 00:11:17.976 }' 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:17.976 BaseBdev2 00:11:17.976 BaseBdev3' 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.976 10:39:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.976 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.234 [2024-11-15 
10:39:48.588070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.234 [2024-11-15 10:39:48.588105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.234 [2024-11-15 10:39:48.588197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.234 [2024-11-15 10:39:48.588274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.234 [2024-11-15 10:39:48.588295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65833 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65833 ']' 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65833 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65833 00:11:18.234 killing process with pid 65833 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65833' 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 65833 00:11:18.234 [2024-11-15 10:39:48.625414] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.234 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65833 00:11:18.490 [2024-11-15 10:39:48.882336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.437 ************************************ 00:11:19.437 END TEST raid_state_function_test 00:11:19.437 ************************************ 00:11:19.437 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:19.437 00:11:19.437 real 0m11.625s 00:11:19.437 user 0m19.479s 00:11:19.437 sys 0m1.459s 00:11:19.437 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:19.437 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.438 10:39:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:19.438 10:39:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:19.438 10:39:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:19.438 10:39:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.438 ************************************ 00:11:19.438 START TEST raid_state_function_test_sb 00:11:19.438 ************************************ 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:19.438 10:39:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:19.438 Process raid pid: 66465 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66465 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66465' 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66465 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66465 ']' 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:19.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:19.438 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.696 [2024-11-15 10:39:50.049131] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:11:19.696 [2024-11-15 10:39:50.049971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.696 [2024-11-15 10:39:50.242634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.954 [2024-11-15 10:39:50.368486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.212 [2024-11-15 10:39:50.591575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.212 [2024-11-15 10:39:50.591631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.786 [2024-11-15 10:39:51.071729] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.786 [2024-11-15 10:39:51.071799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.786 [2024-11-15 10:39:51.071818] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.786 [2024-11-15 10:39:51.071836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.786 [2024-11-15 10:39:51.071846] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:20.786 [2024-11-15 10:39:51.071861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.786 "name": "Existed_Raid", 00:11:20.786 "uuid": "091fec73-eadb-4c70-9195-29a6aab973d4", 00:11:20.786 "strip_size_kb": 64, 00:11:20.786 "state": "configuring", 00:11:20.786 "raid_level": "concat", 00:11:20.786 "superblock": true, 00:11:20.786 "num_base_bdevs": 3, 00:11:20.786 "num_base_bdevs_discovered": 0, 00:11:20.786 "num_base_bdevs_operational": 3, 00:11:20.786 "base_bdevs_list": [ 00:11:20.786 { 00:11:20.786 "name": "BaseBdev1", 00:11:20.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.786 "is_configured": false, 00:11:20.786 "data_offset": 0, 00:11:20.786 "data_size": 0 00:11:20.786 }, 00:11:20.786 { 00:11:20.786 "name": "BaseBdev2", 00:11:20.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.786 "is_configured": false, 00:11:20.786 "data_offset": 0, 00:11:20.786 "data_size": 0 00:11:20.786 }, 00:11:20.786 { 00:11:20.786 "name": "BaseBdev3", 00:11:20.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.786 "is_configured": false, 00:11:20.786 "data_offset": 0, 00:11:20.786 "data_size": 0 00:11:20.786 } 00:11:20.786 ] 00:11:20.786 }' 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.786 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.057 [2024-11-15 10:39:51.591784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.057 [2024-11-15 10:39:51.591830] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.057 [2024-11-15 10:39:51.599787] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.057 [2024-11-15 10:39:51.599845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.057 [2024-11-15 10:39:51.599862] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.057 [2024-11-15 10:39:51.599879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.057 [2024-11-15 10:39:51.599890] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.057 [2024-11-15 10:39:51.599905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.057 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.315 [2024-11-15 10:39:51.640144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.315 BaseBdev1 
00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.315 [ 00:11:21.315 { 00:11:21.315 "name": "BaseBdev1", 00:11:21.315 "aliases": [ 00:11:21.315 "c1fbb080-671c-4de4-954e-69759709c428" 00:11:21.315 ], 00:11:21.315 "product_name": "Malloc disk", 00:11:21.315 "block_size": 512, 00:11:21.315 "num_blocks": 65536, 00:11:21.315 "uuid": "c1fbb080-671c-4de4-954e-69759709c428", 00:11:21.315 "assigned_rate_limits": { 00:11:21.315 
"rw_ios_per_sec": 0, 00:11:21.315 "rw_mbytes_per_sec": 0, 00:11:21.315 "r_mbytes_per_sec": 0, 00:11:21.315 "w_mbytes_per_sec": 0 00:11:21.315 }, 00:11:21.315 "claimed": true, 00:11:21.315 "claim_type": "exclusive_write", 00:11:21.315 "zoned": false, 00:11:21.315 "supported_io_types": { 00:11:21.315 "read": true, 00:11:21.315 "write": true, 00:11:21.315 "unmap": true, 00:11:21.315 "flush": true, 00:11:21.315 "reset": true, 00:11:21.315 "nvme_admin": false, 00:11:21.315 "nvme_io": false, 00:11:21.315 "nvme_io_md": false, 00:11:21.315 "write_zeroes": true, 00:11:21.315 "zcopy": true, 00:11:21.315 "get_zone_info": false, 00:11:21.315 "zone_management": false, 00:11:21.315 "zone_append": false, 00:11:21.315 "compare": false, 00:11:21.315 "compare_and_write": false, 00:11:21.315 "abort": true, 00:11:21.315 "seek_hole": false, 00:11:21.315 "seek_data": false, 00:11:21.315 "copy": true, 00:11:21.315 "nvme_iov_md": false 00:11:21.315 }, 00:11:21.315 "memory_domains": [ 00:11:21.315 { 00:11:21.315 "dma_device_id": "system", 00:11:21.315 "dma_device_type": 1 00:11:21.315 }, 00:11:21.315 { 00:11:21.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.315 "dma_device_type": 2 00:11:21.315 } 00:11:21.315 ], 00:11:21.315 "driver_specific": {} 00:11:21.315 } 00:11:21.315 ] 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.315 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.315 "name": "Existed_Raid", 00:11:21.315 "uuid": "ab2e0d51-f0bf-434d-9f7f-48d191b8dcc0", 00:11:21.315 "strip_size_kb": 64, 00:11:21.315 "state": "configuring", 00:11:21.315 "raid_level": "concat", 00:11:21.316 "superblock": true, 00:11:21.316 "num_base_bdevs": 3, 00:11:21.316 "num_base_bdevs_discovered": 1, 00:11:21.316 "num_base_bdevs_operational": 3, 00:11:21.316 "base_bdevs_list": [ 00:11:21.316 { 00:11:21.316 "name": "BaseBdev1", 00:11:21.316 "uuid": "c1fbb080-671c-4de4-954e-69759709c428", 00:11:21.316 "is_configured": true, 00:11:21.316 "data_offset": 2048, 00:11:21.316 "data_size": 
63488 00:11:21.316 }, 00:11:21.316 { 00:11:21.316 "name": "BaseBdev2", 00:11:21.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.316 "is_configured": false, 00:11:21.316 "data_offset": 0, 00:11:21.316 "data_size": 0 00:11:21.316 }, 00:11:21.316 { 00:11:21.316 "name": "BaseBdev3", 00:11:21.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.316 "is_configured": false, 00:11:21.316 "data_offset": 0, 00:11:21.316 "data_size": 0 00:11:21.316 } 00:11:21.316 ] 00:11:21.316 }' 00:11:21.316 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.316 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.882 [2024-11-15 10:39:52.176344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.882 [2024-11-15 10:39:52.176422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.882 [2024-11-15 10:39:52.184415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.882 [2024-11-15 
10:39:52.186688] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.882 [2024-11-15 10:39:52.186887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.882 [2024-11-15 10:39:52.186916] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.882 [2024-11-15 10:39:52.186936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.882 "name": "Existed_Raid", 00:11:21.882 "uuid": "0c1a2898-4f02-4b41-ade7-6c5d15da1330", 00:11:21.882 "strip_size_kb": 64, 00:11:21.882 "state": "configuring", 00:11:21.882 "raid_level": "concat", 00:11:21.882 "superblock": true, 00:11:21.882 "num_base_bdevs": 3, 00:11:21.882 "num_base_bdevs_discovered": 1, 00:11:21.882 "num_base_bdevs_operational": 3, 00:11:21.882 "base_bdevs_list": [ 00:11:21.882 { 00:11:21.882 "name": "BaseBdev1", 00:11:21.882 "uuid": "c1fbb080-671c-4de4-954e-69759709c428", 00:11:21.882 "is_configured": true, 00:11:21.882 "data_offset": 2048, 00:11:21.882 "data_size": 63488 00:11:21.882 }, 00:11:21.882 { 00:11:21.882 "name": "BaseBdev2", 00:11:21.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.882 "is_configured": false, 00:11:21.882 "data_offset": 0, 00:11:21.882 "data_size": 0 00:11:21.882 }, 00:11:21.882 { 00:11:21.882 "name": "BaseBdev3", 00:11:21.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.882 "is_configured": false, 00:11:21.882 "data_offset": 0, 00:11:21.882 "data_size": 0 00:11:21.882 } 00:11:21.882 ] 00:11:21.882 }' 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.882 10:39:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.449 [2024-11-15 10:39:52.762482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.449 BaseBdev2 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.449 [ 00:11:22.449 { 00:11:22.449 "name": "BaseBdev2", 00:11:22.449 "aliases": [ 00:11:22.449 "784f851a-543a-48d8-a231-acd96f0dac06" 00:11:22.449 ], 00:11:22.449 "product_name": "Malloc disk", 00:11:22.449 "block_size": 512, 00:11:22.449 "num_blocks": 65536, 00:11:22.449 "uuid": "784f851a-543a-48d8-a231-acd96f0dac06", 00:11:22.449 "assigned_rate_limits": { 00:11:22.449 "rw_ios_per_sec": 0, 00:11:22.449 "rw_mbytes_per_sec": 0, 00:11:22.449 "r_mbytes_per_sec": 0, 00:11:22.449 "w_mbytes_per_sec": 0 00:11:22.449 }, 00:11:22.449 "claimed": true, 00:11:22.449 "claim_type": "exclusive_write", 00:11:22.449 "zoned": false, 00:11:22.449 "supported_io_types": { 00:11:22.449 "read": true, 00:11:22.449 "write": true, 00:11:22.449 "unmap": true, 00:11:22.449 "flush": true, 00:11:22.449 "reset": true, 00:11:22.449 "nvme_admin": false, 00:11:22.449 "nvme_io": false, 00:11:22.449 "nvme_io_md": false, 00:11:22.449 "write_zeroes": true, 00:11:22.449 "zcopy": true, 00:11:22.449 "get_zone_info": false, 00:11:22.449 "zone_management": false, 00:11:22.449 "zone_append": false, 00:11:22.449 "compare": false, 00:11:22.449 "compare_and_write": false, 00:11:22.449 "abort": true, 00:11:22.449 "seek_hole": false, 00:11:22.449 "seek_data": false, 00:11:22.449 "copy": true, 00:11:22.449 "nvme_iov_md": false 00:11:22.449 }, 00:11:22.449 "memory_domains": [ 00:11:22.449 { 00:11:22.449 "dma_device_id": "system", 00:11:22.449 "dma_device_type": 1 00:11:22.449 }, 00:11:22.449 { 00:11:22.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.449 "dma_device_type": 2 00:11:22.449 } 00:11:22.449 ], 00:11:22.449 "driver_specific": {} 00:11:22.449 } 00:11:22.449 ] 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.449 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.450 "name": "Existed_Raid", 00:11:22.450 "uuid": "0c1a2898-4f02-4b41-ade7-6c5d15da1330", 00:11:22.450 "strip_size_kb": 64, 00:11:22.450 "state": "configuring", 00:11:22.450 "raid_level": "concat", 00:11:22.450 "superblock": true, 00:11:22.450 "num_base_bdevs": 3, 00:11:22.450 "num_base_bdevs_discovered": 2, 00:11:22.450 "num_base_bdevs_operational": 3, 00:11:22.450 "base_bdevs_list": [ 00:11:22.450 { 00:11:22.450 "name": "BaseBdev1", 00:11:22.450 "uuid": "c1fbb080-671c-4de4-954e-69759709c428", 00:11:22.450 "is_configured": true, 00:11:22.450 "data_offset": 2048, 00:11:22.450 "data_size": 63488 00:11:22.450 }, 00:11:22.450 { 00:11:22.450 "name": "BaseBdev2", 00:11:22.450 "uuid": "784f851a-543a-48d8-a231-acd96f0dac06", 00:11:22.450 "is_configured": true, 00:11:22.450 "data_offset": 2048, 00:11:22.450 "data_size": 63488 00:11:22.450 }, 00:11:22.450 { 00:11:22.450 "name": "BaseBdev3", 00:11:22.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.450 "is_configured": false, 00:11:22.450 "data_offset": 0, 00:11:22.450 "data_size": 0 00:11:22.450 } 00:11:22.450 ] 00:11:22.450 }' 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.450 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.016 [2024-11-15 10:39:53.346933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.016 [2024-11-15 10:39:53.347482] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:23.016 [2024-11-15 10:39:53.347521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:23.016 BaseBdev3 00:11:23.016 [2024-11-15 10:39:53.347843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:23.016 [2024-11-15 10:39:53.348062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:23.016 [2024-11-15 10:39:53.348086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:23.016 [2024-11-15 10:39:53.348270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.016 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.016 [ 00:11:23.016 { 00:11:23.016 "name": "BaseBdev3", 00:11:23.016 "aliases": [ 00:11:23.016 "871cabe6-cd88-4a87-9841-3fb10dd50f99" 00:11:23.016 ], 00:11:23.016 "product_name": "Malloc disk", 00:11:23.016 "block_size": 512, 00:11:23.016 "num_blocks": 65536, 00:11:23.016 "uuid": "871cabe6-cd88-4a87-9841-3fb10dd50f99", 00:11:23.016 "assigned_rate_limits": { 00:11:23.016 "rw_ios_per_sec": 0, 00:11:23.016 "rw_mbytes_per_sec": 0, 00:11:23.016 "r_mbytes_per_sec": 0, 00:11:23.016 "w_mbytes_per_sec": 0 00:11:23.016 }, 00:11:23.016 "claimed": true, 00:11:23.016 "claim_type": "exclusive_write", 00:11:23.016 "zoned": false, 00:11:23.016 "supported_io_types": { 00:11:23.016 "read": true, 00:11:23.016 "write": true, 00:11:23.016 "unmap": true, 00:11:23.016 "flush": true, 00:11:23.016 "reset": true, 00:11:23.016 "nvme_admin": false, 00:11:23.016 "nvme_io": false, 00:11:23.016 "nvme_io_md": false, 00:11:23.016 "write_zeroes": true, 00:11:23.016 "zcopy": true, 00:11:23.016 "get_zone_info": false, 00:11:23.016 "zone_management": false, 00:11:23.016 "zone_append": false, 00:11:23.016 "compare": false, 00:11:23.016 "compare_and_write": false, 00:11:23.016 "abort": true, 00:11:23.016 "seek_hole": false, 00:11:23.016 "seek_data": false, 00:11:23.016 "copy": true, 00:11:23.016 "nvme_iov_md": false 00:11:23.016 }, 00:11:23.016 "memory_domains": [ 00:11:23.016 { 00:11:23.016 "dma_device_id": "system", 00:11:23.016 "dma_device_type": 1 00:11:23.016 }, 00:11:23.016 { 00:11:23.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.017 "dma_device_type": 2 00:11:23.017 } 00:11:23.017 ], 00:11:23.017 "driver_specific": 
{} 00:11:23.017 } 00:11:23.017 ] 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.017 "name": "Existed_Raid", 00:11:23.017 "uuid": "0c1a2898-4f02-4b41-ade7-6c5d15da1330", 00:11:23.017 "strip_size_kb": 64, 00:11:23.017 "state": "online", 00:11:23.017 "raid_level": "concat", 00:11:23.017 "superblock": true, 00:11:23.017 "num_base_bdevs": 3, 00:11:23.017 "num_base_bdevs_discovered": 3, 00:11:23.017 "num_base_bdevs_operational": 3, 00:11:23.017 "base_bdevs_list": [ 00:11:23.017 { 00:11:23.017 "name": "BaseBdev1", 00:11:23.017 "uuid": "c1fbb080-671c-4de4-954e-69759709c428", 00:11:23.017 "is_configured": true, 00:11:23.017 "data_offset": 2048, 00:11:23.017 "data_size": 63488 00:11:23.017 }, 00:11:23.017 { 00:11:23.017 "name": "BaseBdev2", 00:11:23.017 "uuid": "784f851a-543a-48d8-a231-acd96f0dac06", 00:11:23.017 "is_configured": true, 00:11:23.017 "data_offset": 2048, 00:11:23.017 "data_size": 63488 00:11:23.017 }, 00:11:23.017 { 00:11:23.017 "name": "BaseBdev3", 00:11:23.017 "uuid": "871cabe6-cd88-4a87-9841-3fb10dd50f99", 00:11:23.017 "is_configured": true, 00:11:23.017 "data_offset": 2048, 00:11:23.017 "data_size": 63488 00:11:23.017 } 00:11:23.017 ] 00:11:23.017 }' 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.017 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.582 [2024-11-15 10:39:53.923538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.582 "name": "Existed_Raid", 00:11:23.582 "aliases": [ 00:11:23.582 "0c1a2898-4f02-4b41-ade7-6c5d15da1330" 00:11:23.582 ], 00:11:23.582 "product_name": "Raid Volume", 00:11:23.582 "block_size": 512, 00:11:23.582 "num_blocks": 190464, 00:11:23.582 "uuid": "0c1a2898-4f02-4b41-ade7-6c5d15da1330", 00:11:23.582 "assigned_rate_limits": { 00:11:23.582 "rw_ios_per_sec": 0, 00:11:23.582 "rw_mbytes_per_sec": 0, 00:11:23.582 "r_mbytes_per_sec": 0, 00:11:23.582 "w_mbytes_per_sec": 0 00:11:23.582 }, 00:11:23.582 "claimed": false, 00:11:23.582 "zoned": false, 00:11:23.582 "supported_io_types": { 00:11:23.582 "read": true, 00:11:23.582 "write": true, 00:11:23.582 "unmap": true, 00:11:23.582 "flush": true, 00:11:23.582 "reset": true, 00:11:23.582 "nvme_admin": false, 00:11:23.582 "nvme_io": false, 00:11:23.582 "nvme_io_md": false, 00:11:23.582 
"write_zeroes": true, 00:11:23.582 "zcopy": false, 00:11:23.582 "get_zone_info": false, 00:11:23.582 "zone_management": false, 00:11:23.582 "zone_append": false, 00:11:23.582 "compare": false, 00:11:23.582 "compare_and_write": false, 00:11:23.582 "abort": false, 00:11:23.582 "seek_hole": false, 00:11:23.582 "seek_data": false, 00:11:23.582 "copy": false, 00:11:23.582 "nvme_iov_md": false 00:11:23.582 }, 00:11:23.582 "memory_domains": [ 00:11:23.582 { 00:11:23.582 "dma_device_id": "system", 00:11:23.582 "dma_device_type": 1 00:11:23.582 }, 00:11:23.582 { 00:11:23.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.582 "dma_device_type": 2 00:11:23.582 }, 00:11:23.582 { 00:11:23.582 "dma_device_id": "system", 00:11:23.582 "dma_device_type": 1 00:11:23.582 }, 00:11:23.582 { 00:11:23.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.582 "dma_device_type": 2 00:11:23.582 }, 00:11:23.582 { 00:11:23.582 "dma_device_id": "system", 00:11:23.582 "dma_device_type": 1 00:11:23.582 }, 00:11:23.582 { 00:11:23.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.582 "dma_device_type": 2 00:11:23.582 } 00:11:23.582 ], 00:11:23.582 "driver_specific": { 00:11:23.582 "raid": { 00:11:23.582 "uuid": "0c1a2898-4f02-4b41-ade7-6c5d15da1330", 00:11:23.582 "strip_size_kb": 64, 00:11:23.582 "state": "online", 00:11:23.582 "raid_level": "concat", 00:11:23.582 "superblock": true, 00:11:23.582 "num_base_bdevs": 3, 00:11:23.582 "num_base_bdevs_discovered": 3, 00:11:23.582 "num_base_bdevs_operational": 3, 00:11:23.582 "base_bdevs_list": [ 00:11:23.582 { 00:11:23.582 "name": "BaseBdev1", 00:11:23.582 "uuid": "c1fbb080-671c-4de4-954e-69759709c428", 00:11:23.582 "is_configured": true, 00:11:23.582 "data_offset": 2048, 00:11:23.582 "data_size": 63488 00:11:23.582 }, 00:11:23.582 { 00:11:23.582 "name": "BaseBdev2", 00:11:23.582 "uuid": "784f851a-543a-48d8-a231-acd96f0dac06", 00:11:23.582 "is_configured": true, 00:11:23.582 "data_offset": 2048, 00:11:23.582 "data_size": 63488 00:11:23.582 }, 
00:11:23.582 { 00:11:23.582 "name": "BaseBdev3", 00:11:23.582 "uuid": "871cabe6-cd88-4a87-9841-3fb10dd50f99", 00:11:23.582 "is_configured": true, 00:11:23.582 "data_offset": 2048, 00:11:23.582 "data_size": 63488 00:11:23.582 } 00:11:23.582 ] 00:11:23.582 } 00:11:23.582 } 00:11:23.582 }' 00:11:23.582 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.582 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:23.582 BaseBdev2 00:11:23.582 BaseBdev3' 00:11:23.582 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.582 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.582 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.582 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:23.582 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.582 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.582 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.582 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.840 
10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.840 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.841 [2024-11-15 10:39:54.247277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.841 [2024-11-15 10:39:54.247312] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.841 [2024-11-15 10:39:54.247410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.841 "name": "Existed_Raid", 00:11:23.841 "uuid": "0c1a2898-4f02-4b41-ade7-6c5d15da1330", 00:11:23.841 "strip_size_kb": 64, 00:11:23.841 "state": "offline", 00:11:23.841 "raid_level": "concat", 00:11:23.841 "superblock": true, 00:11:23.841 "num_base_bdevs": 3, 00:11:23.841 "num_base_bdevs_discovered": 2, 00:11:23.841 "num_base_bdevs_operational": 2, 00:11:23.841 "base_bdevs_list": [ 00:11:23.841 { 00:11:23.841 "name": null, 00:11:23.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.841 "is_configured": false, 00:11:23.841 "data_offset": 0, 00:11:23.841 "data_size": 63488 00:11:23.841 }, 00:11:23.841 { 00:11:23.841 "name": "BaseBdev2", 00:11:23.841 "uuid": "784f851a-543a-48d8-a231-acd96f0dac06", 00:11:23.841 "is_configured": true, 00:11:23.841 "data_offset": 2048, 00:11:23.841 "data_size": 63488 00:11:23.841 }, 00:11:23.841 { 00:11:23.841 "name": "BaseBdev3", 00:11:23.841 "uuid": "871cabe6-cd88-4a87-9841-3fb10dd50f99", 
00:11:23.841 "is_configured": true, 00:11:23.841 "data_offset": 2048, 00:11:23.841 "data_size": 63488 00:11:23.841 } 00:11:23.841 ] 00:11:23.841 }' 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.841 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 [2024-11-15 10:39:54.860177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.668 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.668 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.668 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:24.668 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.668 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.668 [2024-11-15 10:39:55.000715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:24.668 [2024-11-15 10:39:55.000912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.668 BaseBdev2 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:24.668 10:39:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.668 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.668 [ 00:11:24.668 { 00:11:24.668 "name": "BaseBdev2", 00:11:24.668 "aliases": [ 00:11:24.668 "3f17d9bf-4358-48de-bcbb-0b0046974086" 00:11:24.668 ], 00:11:24.668 "product_name": "Malloc disk", 00:11:24.668 "block_size": 512, 00:11:24.668 "num_blocks": 65536, 00:11:24.668 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:24.668 "assigned_rate_limits": { 00:11:24.668 "rw_ios_per_sec": 0, 00:11:24.668 "rw_mbytes_per_sec": 0, 00:11:24.668 "r_mbytes_per_sec": 0, 00:11:24.668 "w_mbytes_per_sec": 0 00:11:24.668 }, 00:11:24.668 "claimed": false, 00:11:24.668 "zoned": false, 00:11:24.668 "supported_io_types": { 00:11:24.668 "read": true, 00:11:24.668 "write": true, 00:11:24.668 "unmap": true, 00:11:24.668 "flush": true, 00:11:24.668 "reset": true, 00:11:24.668 "nvme_admin": false, 00:11:24.668 "nvme_io": false, 00:11:24.668 "nvme_io_md": false, 00:11:24.668 "write_zeroes": true, 00:11:24.668 "zcopy": true, 00:11:24.668 "get_zone_info": false, 00:11:24.668 
"zone_management": false, 00:11:24.668 "zone_append": false, 00:11:24.668 "compare": false, 00:11:24.668 "compare_and_write": false, 00:11:24.668 "abort": true, 00:11:24.669 "seek_hole": false, 00:11:24.669 "seek_data": false, 00:11:24.669 "copy": true, 00:11:24.669 "nvme_iov_md": false 00:11:24.669 }, 00:11:24.669 "memory_domains": [ 00:11:24.669 { 00:11:24.669 "dma_device_id": "system", 00:11:24.669 "dma_device_type": 1 00:11:24.669 }, 00:11:24.669 { 00:11:24.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.669 "dma_device_type": 2 00:11:24.669 } 00:11:24.669 ], 00:11:24.669 "driver_specific": {} 00:11:24.669 } 00:11:24.669 ] 00:11:24.669 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.669 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:24.669 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.669 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.669 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.669 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.669 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.927 BaseBdev3 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.927 [ 00:11:24.927 { 00:11:24.927 "name": "BaseBdev3", 00:11:24.927 "aliases": [ 00:11:24.927 "6b4dff04-68b2-47a2-becb-00ebfe2c0676" 00:11:24.927 ], 00:11:24.927 "product_name": "Malloc disk", 00:11:24.927 "block_size": 512, 00:11:24.927 "num_blocks": 65536, 00:11:24.927 "uuid": "6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:24.927 "assigned_rate_limits": { 00:11:24.927 "rw_ios_per_sec": 0, 00:11:24.927 "rw_mbytes_per_sec": 0, 00:11:24.927 "r_mbytes_per_sec": 0, 00:11:24.927 "w_mbytes_per_sec": 0 00:11:24.927 }, 00:11:24.927 "claimed": false, 00:11:24.927 "zoned": false, 00:11:24.927 "supported_io_types": { 00:11:24.927 "read": true, 00:11:24.927 "write": true, 00:11:24.927 "unmap": true, 00:11:24.927 "flush": true, 00:11:24.927 "reset": true, 00:11:24.927 "nvme_admin": false, 00:11:24.927 "nvme_io": false, 00:11:24.927 "nvme_io_md": false, 00:11:24.927 "write_zeroes": true, 00:11:24.927 
"zcopy": true, 00:11:24.927 "get_zone_info": false, 00:11:24.927 "zone_management": false, 00:11:24.927 "zone_append": false, 00:11:24.927 "compare": false, 00:11:24.927 "compare_and_write": false, 00:11:24.927 "abort": true, 00:11:24.927 "seek_hole": false, 00:11:24.927 "seek_data": false, 00:11:24.927 "copy": true, 00:11:24.927 "nvme_iov_md": false 00:11:24.927 }, 00:11:24.927 "memory_domains": [ 00:11:24.927 { 00:11:24.927 "dma_device_id": "system", 00:11:24.927 "dma_device_type": 1 00:11:24.927 }, 00:11:24.927 { 00:11:24.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.927 "dma_device_type": 2 00:11:24.927 } 00:11:24.927 ], 00:11:24.927 "driver_specific": {} 00:11:24.927 } 00:11:24.927 ] 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.927 [2024-11-15 10:39:55.281924] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.927 [2024-11-15 10:39:55.281982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.927 [2024-11-15 10:39:55.282016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.927 [2024-11-15 10:39:55.284274] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.927 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.927 10:39:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.927 "name": "Existed_Raid", 00:11:24.927 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:24.927 "strip_size_kb": 64, 00:11:24.927 "state": "configuring", 00:11:24.927 "raid_level": "concat", 00:11:24.927 "superblock": true, 00:11:24.927 "num_base_bdevs": 3, 00:11:24.927 "num_base_bdevs_discovered": 2, 00:11:24.927 "num_base_bdevs_operational": 3, 00:11:24.927 "base_bdevs_list": [ 00:11:24.927 { 00:11:24.927 "name": "BaseBdev1", 00:11:24.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.927 "is_configured": false, 00:11:24.927 "data_offset": 0, 00:11:24.927 "data_size": 0 00:11:24.927 }, 00:11:24.927 { 00:11:24.927 "name": "BaseBdev2", 00:11:24.927 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:24.927 "is_configured": true, 00:11:24.927 "data_offset": 2048, 00:11:24.927 "data_size": 63488 00:11:24.927 }, 00:11:24.927 { 00:11:24.927 "name": "BaseBdev3", 00:11:24.927 "uuid": "6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:24.927 "is_configured": true, 00:11:24.927 "data_offset": 2048, 00:11:24.927 "data_size": 63488 00:11:24.927 } 00:11:24.927 ] 00:11:24.927 }' 00:11:24.928 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.928 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.492 [2024-11-15 10:39:55.810074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.492 10:39:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.492 "name": "Existed_Raid", 00:11:25.492 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:25.492 "strip_size_kb": 64, 
00:11:25.492 "state": "configuring", 00:11:25.492 "raid_level": "concat", 00:11:25.492 "superblock": true, 00:11:25.492 "num_base_bdevs": 3, 00:11:25.492 "num_base_bdevs_discovered": 1, 00:11:25.492 "num_base_bdevs_operational": 3, 00:11:25.492 "base_bdevs_list": [ 00:11:25.492 { 00:11:25.492 "name": "BaseBdev1", 00:11:25.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.492 "is_configured": false, 00:11:25.492 "data_offset": 0, 00:11:25.492 "data_size": 0 00:11:25.492 }, 00:11:25.492 { 00:11:25.492 "name": null, 00:11:25.492 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:25.492 "is_configured": false, 00:11:25.492 "data_offset": 0, 00:11:25.492 "data_size": 63488 00:11:25.492 }, 00:11:25.492 { 00:11:25.492 "name": "BaseBdev3", 00:11:25.492 "uuid": "6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:25.492 "is_configured": true, 00:11:25.492 "data_offset": 2048, 00:11:25.492 "data_size": 63488 00:11:25.492 } 00:11:25.492 ] 00:11:25.492 }' 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.492 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.057 [2024-11-15 10:39:56.403810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.057 BaseBdev1 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.057 
[ 00:11:26.057 { 00:11:26.057 "name": "BaseBdev1", 00:11:26.057 "aliases": [ 00:11:26.057 "381eb4b4-448d-4651-bbda-6115c08ce61e" 00:11:26.057 ], 00:11:26.057 "product_name": "Malloc disk", 00:11:26.057 "block_size": 512, 00:11:26.057 "num_blocks": 65536, 00:11:26.057 "uuid": "381eb4b4-448d-4651-bbda-6115c08ce61e", 00:11:26.057 "assigned_rate_limits": { 00:11:26.057 "rw_ios_per_sec": 0, 00:11:26.057 "rw_mbytes_per_sec": 0, 00:11:26.057 "r_mbytes_per_sec": 0, 00:11:26.057 "w_mbytes_per_sec": 0 00:11:26.057 }, 00:11:26.057 "claimed": true, 00:11:26.057 "claim_type": "exclusive_write", 00:11:26.057 "zoned": false, 00:11:26.057 "supported_io_types": { 00:11:26.057 "read": true, 00:11:26.057 "write": true, 00:11:26.057 "unmap": true, 00:11:26.057 "flush": true, 00:11:26.057 "reset": true, 00:11:26.057 "nvme_admin": false, 00:11:26.057 "nvme_io": false, 00:11:26.057 "nvme_io_md": false, 00:11:26.057 "write_zeroes": true, 00:11:26.057 "zcopy": true, 00:11:26.057 "get_zone_info": false, 00:11:26.057 "zone_management": false, 00:11:26.057 "zone_append": false, 00:11:26.057 "compare": false, 00:11:26.057 "compare_and_write": false, 00:11:26.057 "abort": true, 00:11:26.057 "seek_hole": false, 00:11:26.057 "seek_data": false, 00:11:26.057 "copy": true, 00:11:26.057 "nvme_iov_md": false 00:11:26.057 }, 00:11:26.057 "memory_domains": [ 00:11:26.057 { 00:11:26.057 "dma_device_id": "system", 00:11:26.057 "dma_device_type": 1 00:11:26.057 }, 00:11:26.057 { 00:11:26.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.057 "dma_device_type": 2 00:11:26.057 } 00:11:26.057 ], 00:11:26.057 "driver_specific": {} 00:11:26.057 } 00:11:26.057 ] 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.057 "name": "Existed_Raid", 00:11:26.057 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:26.057 "strip_size_kb": 64, 00:11:26.057 "state": "configuring", 00:11:26.057 "raid_level": "concat", 00:11:26.057 "superblock": true, 
00:11:26.057 "num_base_bdevs": 3, 00:11:26.057 "num_base_bdevs_discovered": 2, 00:11:26.057 "num_base_bdevs_operational": 3, 00:11:26.057 "base_bdevs_list": [ 00:11:26.057 { 00:11:26.057 "name": "BaseBdev1", 00:11:26.057 "uuid": "381eb4b4-448d-4651-bbda-6115c08ce61e", 00:11:26.057 "is_configured": true, 00:11:26.057 "data_offset": 2048, 00:11:26.057 "data_size": 63488 00:11:26.057 }, 00:11:26.057 { 00:11:26.057 "name": null, 00:11:26.057 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:26.057 "is_configured": false, 00:11:26.057 "data_offset": 0, 00:11:26.057 "data_size": 63488 00:11:26.057 }, 00:11:26.057 { 00:11:26.057 "name": "BaseBdev3", 00:11:26.057 "uuid": "6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:26.057 "is_configured": true, 00:11:26.057 "data_offset": 2048, 00:11:26.057 "data_size": 63488 00:11:26.057 } 00:11:26.057 ] 00:11:26.057 }' 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.057 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.622 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:26.622 10:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.622 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.622 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.622 10:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.622 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:26.622 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:26.622 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:26.622 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.622 [2024-11-15 10:39:57.036156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.622 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.622 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:26.622 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.622 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.622 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.623 "name": "Existed_Raid", 00:11:26.623 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:26.623 "strip_size_kb": 64, 00:11:26.623 "state": "configuring", 00:11:26.623 "raid_level": "concat", 00:11:26.623 "superblock": true, 00:11:26.623 "num_base_bdevs": 3, 00:11:26.623 "num_base_bdevs_discovered": 1, 00:11:26.623 "num_base_bdevs_operational": 3, 00:11:26.623 "base_bdevs_list": [ 00:11:26.623 { 00:11:26.623 "name": "BaseBdev1", 00:11:26.623 "uuid": "381eb4b4-448d-4651-bbda-6115c08ce61e", 00:11:26.623 "is_configured": true, 00:11:26.623 "data_offset": 2048, 00:11:26.623 "data_size": 63488 00:11:26.623 }, 00:11:26.623 { 00:11:26.623 "name": null, 00:11:26.623 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:26.623 "is_configured": false, 00:11:26.623 "data_offset": 0, 00:11:26.623 "data_size": 63488 00:11:26.623 }, 00:11:26.623 { 00:11:26.623 "name": null, 00:11:26.623 "uuid": "6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:26.623 "is_configured": false, 00:11:26.623 "data_offset": 0, 00:11:26.623 "data_size": 63488 00:11:26.623 } 00:11:26.623 ] 00:11:26.623 }' 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.623 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.190 [2024-11-15 10:39:57.616328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.190 "name": "Existed_Raid", 00:11:27.190 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:27.190 "strip_size_kb": 64, 00:11:27.190 "state": "configuring", 00:11:27.190 "raid_level": "concat", 00:11:27.190 "superblock": true, 00:11:27.190 "num_base_bdevs": 3, 00:11:27.190 "num_base_bdevs_discovered": 2, 00:11:27.190 "num_base_bdevs_operational": 3, 00:11:27.190 "base_bdevs_list": [ 00:11:27.190 { 00:11:27.190 "name": "BaseBdev1", 00:11:27.190 "uuid": "381eb4b4-448d-4651-bbda-6115c08ce61e", 00:11:27.190 "is_configured": true, 00:11:27.190 "data_offset": 2048, 00:11:27.190 "data_size": 63488 00:11:27.190 }, 00:11:27.190 { 00:11:27.190 "name": null, 00:11:27.190 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:27.190 "is_configured": false, 00:11:27.190 "data_offset": 0, 00:11:27.190 "data_size": 63488 00:11:27.190 }, 00:11:27.190 { 00:11:27.190 "name": "BaseBdev3", 00:11:27.190 "uuid": "6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:27.190 "is_configured": true, 00:11:27.190 "data_offset": 2048, 00:11:27.190 "data_size": 63488 00:11:27.190 } 00:11:27.190 ] 00:11:27.190 }' 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.190 10:39:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.758 [2024-11-15 10:39:58.168533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.758 "name": "Existed_Raid", 00:11:27.758 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:27.758 "strip_size_kb": 64, 00:11:27.758 "state": "configuring", 00:11:27.758 "raid_level": "concat", 00:11:27.758 "superblock": true, 00:11:27.758 "num_base_bdevs": 3, 00:11:27.758 "num_base_bdevs_discovered": 1, 00:11:27.758 "num_base_bdevs_operational": 3, 00:11:27.758 "base_bdevs_list": [ 00:11:27.758 { 00:11:27.758 "name": null, 00:11:27.758 "uuid": "381eb4b4-448d-4651-bbda-6115c08ce61e", 00:11:27.758 "is_configured": false, 00:11:27.758 "data_offset": 0, 00:11:27.758 "data_size": 63488 00:11:27.758 }, 00:11:27.758 { 00:11:27.758 "name": null, 00:11:27.758 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:27.758 "is_configured": false, 00:11:27.758 "data_offset": 0, 
00:11:27.758 "data_size": 63488 00:11:27.758 }, 00:11:27.758 { 00:11:27.758 "name": "BaseBdev3", 00:11:27.758 "uuid": "6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:27.758 "is_configured": true, 00:11:27.758 "data_offset": 2048, 00:11:27.758 "data_size": 63488 00:11:27.758 } 00:11:27.758 ] 00:11:27.758 }' 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.758 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.333 [2024-11-15 10:39:58.804866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:28.333 10:39:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.333 "name": "Existed_Raid", 00:11:28.333 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:28.333 "strip_size_kb": 64, 00:11:28.333 "state": "configuring", 00:11:28.333 "raid_level": "concat", 00:11:28.333 "superblock": true, 00:11:28.333 "num_base_bdevs": 3, 00:11:28.333 
"num_base_bdevs_discovered": 2, 00:11:28.333 "num_base_bdevs_operational": 3, 00:11:28.333 "base_bdevs_list": [ 00:11:28.333 { 00:11:28.333 "name": null, 00:11:28.333 "uuid": "381eb4b4-448d-4651-bbda-6115c08ce61e", 00:11:28.333 "is_configured": false, 00:11:28.333 "data_offset": 0, 00:11:28.333 "data_size": 63488 00:11:28.333 }, 00:11:28.333 { 00:11:28.333 "name": "BaseBdev2", 00:11:28.333 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:28.333 "is_configured": true, 00:11:28.333 "data_offset": 2048, 00:11:28.333 "data_size": 63488 00:11:28.333 }, 00:11:28.333 { 00:11:28.333 "name": "BaseBdev3", 00:11:28.333 "uuid": "6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:28.333 "is_configured": true, 00:11:28.333 "data_offset": 2048, 00:11:28.333 "data_size": 63488 00:11:28.333 } 00:11:28.333 ] 00:11:28.333 }' 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.333 10:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.900 10:39:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 381eb4b4-448d-4651-bbda-6115c08ce61e 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.900 [2024-11-15 10:39:59.426386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:28.900 [2024-11-15 10:39:59.426858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:28.900 [2024-11-15 10:39:59.426891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:28.900 [2024-11-15 10:39:59.427211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:28.900 NewBaseBdev 00:11:28.900 [2024-11-15 10:39:59.427417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:28.900 [2024-11-15 10:39:59.427435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:28.900 [2024-11-15 10:39:59.427605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 
00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.900 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.900 [ 00:11:28.900 { 00:11:28.900 "name": "NewBaseBdev", 00:11:28.900 "aliases": [ 00:11:28.900 "381eb4b4-448d-4651-bbda-6115c08ce61e" 00:11:28.900 ], 00:11:28.900 "product_name": "Malloc disk", 00:11:28.900 "block_size": 512, 00:11:28.900 "num_blocks": 65536, 00:11:28.900 "uuid": "381eb4b4-448d-4651-bbda-6115c08ce61e", 00:11:28.900 "assigned_rate_limits": { 00:11:28.900 "rw_ios_per_sec": 0, 00:11:28.900 "rw_mbytes_per_sec": 0, 00:11:28.900 "r_mbytes_per_sec": 0, 00:11:28.900 "w_mbytes_per_sec": 0 00:11:28.900 }, 00:11:28.900 "claimed": true, 00:11:28.900 "claim_type": "exclusive_write", 00:11:28.900 "zoned": false, 00:11:28.900 "supported_io_types": { 00:11:28.900 "read": true, 00:11:28.900 "write": true, 
00:11:28.900 "unmap": true, 00:11:28.900 "flush": true, 00:11:28.900 "reset": true, 00:11:28.900 "nvme_admin": false, 00:11:28.900 "nvme_io": false, 00:11:28.900 "nvme_io_md": false, 00:11:28.900 "write_zeroes": true, 00:11:28.900 "zcopy": true, 00:11:28.900 "get_zone_info": false, 00:11:28.900 "zone_management": false, 00:11:29.159 "zone_append": false, 00:11:29.159 "compare": false, 00:11:29.159 "compare_and_write": false, 00:11:29.159 "abort": true, 00:11:29.159 "seek_hole": false, 00:11:29.159 "seek_data": false, 00:11:29.159 "copy": true, 00:11:29.159 "nvme_iov_md": false 00:11:29.159 }, 00:11:29.159 "memory_domains": [ 00:11:29.159 { 00:11:29.159 "dma_device_id": "system", 00:11:29.159 "dma_device_type": 1 00:11:29.159 }, 00:11:29.159 { 00:11:29.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.159 "dma_device_type": 2 00:11:29.159 } 00:11:29.159 ], 00:11:29.159 "driver_specific": {} 00:11:29.159 } 00:11:29.159 ] 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.159 "name": "Existed_Raid", 00:11:29.159 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:29.159 "strip_size_kb": 64, 00:11:29.159 "state": "online", 00:11:29.159 "raid_level": "concat", 00:11:29.159 "superblock": true, 00:11:29.159 "num_base_bdevs": 3, 00:11:29.159 "num_base_bdevs_discovered": 3, 00:11:29.159 "num_base_bdevs_operational": 3, 00:11:29.159 "base_bdevs_list": [ 00:11:29.159 { 00:11:29.159 "name": "NewBaseBdev", 00:11:29.159 "uuid": "381eb4b4-448d-4651-bbda-6115c08ce61e", 00:11:29.159 "is_configured": true, 00:11:29.159 "data_offset": 2048, 00:11:29.159 "data_size": 63488 00:11:29.159 }, 00:11:29.159 { 00:11:29.159 "name": "BaseBdev2", 00:11:29.159 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:29.159 "is_configured": true, 00:11:29.159 "data_offset": 2048, 00:11:29.159 "data_size": 63488 00:11:29.159 }, 00:11:29.159 { 00:11:29.159 "name": "BaseBdev3", 00:11:29.159 "uuid": 
"6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:29.159 "is_configured": true, 00:11:29.159 "data_offset": 2048, 00:11:29.159 "data_size": 63488 00:11:29.159 } 00:11:29.159 ] 00:11:29.159 }' 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.159 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.727 10:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.727 [2024-11-15 10:39:59.994946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.727 "name": "Existed_Raid", 00:11:29.727 "aliases": [ 00:11:29.727 "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095" 
00:11:29.727 ], 00:11:29.727 "product_name": "Raid Volume", 00:11:29.727 "block_size": 512, 00:11:29.727 "num_blocks": 190464, 00:11:29.727 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:29.727 "assigned_rate_limits": { 00:11:29.727 "rw_ios_per_sec": 0, 00:11:29.727 "rw_mbytes_per_sec": 0, 00:11:29.727 "r_mbytes_per_sec": 0, 00:11:29.727 "w_mbytes_per_sec": 0 00:11:29.727 }, 00:11:29.727 "claimed": false, 00:11:29.727 "zoned": false, 00:11:29.727 "supported_io_types": { 00:11:29.727 "read": true, 00:11:29.727 "write": true, 00:11:29.727 "unmap": true, 00:11:29.727 "flush": true, 00:11:29.727 "reset": true, 00:11:29.727 "nvme_admin": false, 00:11:29.727 "nvme_io": false, 00:11:29.727 "nvme_io_md": false, 00:11:29.727 "write_zeroes": true, 00:11:29.727 "zcopy": false, 00:11:29.727 "get_zone_info": false, 00:11:29.727 "zone_management": false, 00:11:29.727 "zone_append": false, 00:11:29.727 "compare": false, 00:11:29.727 "compare_and_write": false, 00:11:29.727 "abort": false, 00:11:29.727 "seek_hole": false, 00:11:29.727 "seek_data": false, 00:11:29.727 "copy": false, 00:11:29.727 "nvme_iov_md": false 00:11:29.727 }, 00:11:29.727 "memory_domains": [ 00:11:29.727 { 00:11:29.727 "dma_device_id": "system", 00:11:29.727 "dma_device_type": 1 00:11:29.727 }, 00:11:29.727 { 00:11:29.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.727 "dma_device_type": 2 00:11:29.727 }, 00:11:29.727 { 00:11:29.727 "dma_device_id": "system", 00:11:29.727 "dma_device_type": 1 00:11:29.727 }, 00:11:29.727 { 00:11:29.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.727 "dma_device_type": 2 00:11:29.727 }, 00:11:29.727 { 00:11:29.727 "dma_device_id": "system", 00:11:29.727 "dma_device_type": 1 00:11:29.727 }, 00:11:29.727 { 00:11:29.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.727 "dma_device_type": 2 00:11:29.727 } 00:11:29.727 ], 00:11:29.727 "driver_specific": { 00:11:29.727 "raid": { 00:11:29.727 "uuid": "11c1d92d-7d8b-4e8a-aa72-b57cd4cfc095", 00:11:29.727 
"strip_size_kb": 64, 00:11:29.727 "state": "online", 00:11:29.727 "raid_level": "concat", 00:11:29.727 "superblock": true, 00:11:29.727 "num_base_bdevs": 3, 00:11:29.727 "num_base_bdevs_discovered": 3, 00:11:29.727 "num_base_bdevs_operational": 3, 00:11:29.727 "base_bdevs_list": [ 00:11:29.727 { 00:11:29.727 "name": "NewBaseBdev", 00:11:29.727 "uuid": "381eb4b4-448d-4651-bbda-6115c08ce61e", 00:11:29.727 "is_configured": true, 00:11:29.727 "data_offset": 2048, 00:11:29.727 "data_size": 63488 00:11:29.727 }, 00:11:29.727 { 00:11:29.727 "name": "BaseBdev2", 00:11:29.727 "uuid": "3f17d9bf-4358-48de-bcbb-0b0046974086", 00:11:29.727 "is_configured": true, 00:11:29.727 "data_offset": 2048, 00:11:29.727 "data_size": 63488 00:11:29.727 }, 00:11:29.727 { 00:11:29.727 "name": "BaseBdev3", 00:11:29.727 "uuid": "6b4dff04-68b2-47a2-becb-00ebfe2c0676", 00:11:29.727 "is_configured": true, 00:11:29.727 "data_offset": 2048, 00:11:29.727 "data_size": 63488 00:11:29.727 } 00:11:29.727 ] 00:11:29.727 } 00:11:29.727 } 00:11:29.727 }' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:29.727 BaseBdev2 00:11:29.727 BaseBdev3' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.727 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.985 [2024-11-15 10:40:00.298647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.985 [2024-11-15 10:40:00.298683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.985 [2024-11-15 10:40:00.298777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.985 [2024-11-15 10:40:00.298854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.985 [2024-11-15 10:40:00.298876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66465 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66465 ']' 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 66465 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66465 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:29.985 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:29.985 killing process with pid 66465 00:11:29.986 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66465' 00:11:29.986 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66465 00:11:29.986 [2024-11-15 10:40:00.333607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.986 10:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66465 00:11:30.244 [2024-11-15 10:40:00.588929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.176 10:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:31.176 00:11:31.176 real 0m11.649s 00:11:31.176 user 0m19.566s 00:11:31.176 sys 0m1.423s 00:11:31.176 10:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.176 10:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.176 ************************************ 00:11:31.176 END TEST raid_state_function_test_sb 00:11:31.176 ************************************ 00:11:31.176 10:40:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:31.176 10:40:01 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:31.176 10:40:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:31.176 10:40:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:31.176 ************************************ 00:11:31.176 START TEST raid_superblock_test 00:11:31.176 ************************************ 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:31.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67097 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67097 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67097 ']' 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:31.177 10:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.435 [2024-11-15 10:40:01.735635] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:11:31.435 [2024-11-15 10:40:01.735960] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67097 ] 00:11:31.435 [2024-11-15 10:40:01.914886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.693 [2024-11-15 10:40:02.041142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.951 [2024-11-15 10:40:02.259741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.951 [2024-11-15 10:40:02.259957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:32.209 
10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.209 malloc1 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.209 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.468 [2024-11-15 10:40:02.772073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.468 [2024-11-15 10:40:02.772151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.468 [2024-11-15 10:40:02.772184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:32.468 [2024-11-15 10:40:02.772200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.468 [2024-11-15 10:40:02.774803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.468 [2024-11-15 10:40:02.774986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.468 pt1 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.468 malloc2 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.468 [2024-11-15 10:40:02.820093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:32.468 [2024-11-15 10:40:02.820176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.468 [2024-11-15 10:40:02.820213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:32.468 [2024-11-15 10:40:02.820228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.468 [2024-11-15 10:40:02.822803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.468 [2024-11-15 10:40:02.822850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:32.468 
pt2 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.468 malloc3 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.468 [2024-11-15 10:40:02.888631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:32.468 [2024-11-15 10:40:02.888701] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.468 [2024-11-15 10:40:02.888735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:32.468 [2024-11-15 10:40:02.888750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.468 [2024-11-15 10:40:02.891296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.468 [2024-11-15 10:40:02.891344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:32.468 pt3 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.468 [2024-11-15 10:40:02.900694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.468 [2024-11-15 10:40:02.902933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:32.468 [2024-11-15 10:40:02.903189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:32.468 [2024-11-15 10:40:02.903434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:32.468 [2024-11-15 10:40:02.903459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:32.468 [2024-11-15 10:40:02.903790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:32.468 [2024-11-15 10:40:02.903990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:32.468 [2024-11-15 10:40:02.904007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:32.468 [2024-11-15 10:40:02.904202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.468 10:40:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.468 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.468 "name": "raid_bdev1", 00:11:32.468 "uuid": "517eef64-952b-4224-854f-b5e91519b4a5", 00:11:32.468 "strip_size_kb": 64, 00:11:32.468 "state": "online", 00:11:32.468 "raid_level": "concat", 00:11:32.468 "superblock": true, 00:11:32.468 "num_base_bdevs": 3, 00:11:32.468 "num_base_bdevs_discovered": 3, 00:11:32.468 "num_base_bdevs_operational": 3, 00:11:32.468 "base_bdevs_list": [ 00:11:32.468 { 00:11:32.468 "name": "pt1", 00:11:32.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.468 "is_configured": true, 00:11:32.468 "data_offset": 2048, 00:11:32.469 "data_size": 63488 00:11:32.469 }, 00:11:32.469 { 00:11:32.469 "name": "pt2", 00:11:32.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.469 "is_configured": true, 00:11:32.469 "data_offset": 2048, 00:11:32.469 "data_size": 63488 00:11:32.469 }, 00:11:32.469 { 00:11:32.469 "name": "pt3", 00:11:32.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.469 "is_configured": true, 00:11:32.469 "data_offset": 2048, 00:11:32.469 "data_size": 63488 00:11:32.469 } 00:11:32.469 ] 00:11:32.469 }' 00:11:32.469 10:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.469 10:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.035 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.036 [2024-11-15 10:40:03.405151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.036 "name": "raid_bdev1", 00:11:33.036 "aliases": [ 00:11:33.036 "517eef64-952b-4224-854f-b5e91519b4a5" 00:11:33.036 ], 00:11:33.036 "product_name": "Raid Volume", 00:11:33.036 "block_size": 512, 00:11:33.036 "num_blocks": 190464, 00:11:33.036 "uuid": "517eef64-952b-4224-854f-b5e91519b4a5", 00:11:33.036 "assigned_rate_limits": { 00:11:33.036 "rw_ios_per_sec": 0, 00:11:33.036 "rw_mbytes_per_sec": 0, 00:11:33.036 "r_mbytes_per_sec": 0, 00:11:33.036 "w_mbytes_per_sec": 0 00:11:33.036 }, 00:11:33.036 "claimed": false, 00:11:33.036 "zoned": false, 00:11:33.036 "supported_io_types": { 00:11:33.036 "read": true, 00:11:33.036 "write": true, 00:11:33.036 "unmap": true, 00:11:33.036 "flush": true, 00:11:33.036 "reset": true, 00:11:33.036 "nvme_admin": false, 00:11:33.036 "nvme_io": false, 00:11:33.036 "nvme_io_md": false, 00:11:33.036 "write_zeroes": true, 00:11:33.036 "zcopy": false, 00:11:33.036 "get_zone_info": false, 00:11:33.036 "zone_management": false, 00:11:33.036 "zone_append": false, 00:11:33.036 "compare": 
false, 00:11:33.036 "compare_and_write": false, 00:11:33.036 "abort": false, 00:11:33.036 "seek_hole": false, 00:11:33.036 "seek_data": false, 00:11:33.036 "copy": false, 00:11:33.036 "nvme_iov_md": false 00:11:33.036 }, 00:11:33.036 "memory_domains": [ 00:11:33.036 { 00:11:33.036 "dma_device_id": "system", 00:11:33.036 "dma_device_type": 1 00:11:33.036 }, 00:11:33.036 { 00:11:33.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.036 "dma_device_type": 2 00:11:33.036 }, 00:11:33.036 { 00:11:33.036 "dma_device_id": "system", 00:11:33.036 "dma_device_type": 1 00:11:33.036 }, 00:11:33.036 { 00:11:33.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.036 "dma_device_type": 2 00:11:33.036 }, 00:11:33.036 { 00:11:33.036 "dma_device_id": "system", 00:11:33.036 "dma_device_type": 1 00:11:33.036 }, 00:11:33.036 { 00:11:33.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.036 "dma_device_type": 2 00:11:33.036 } 00:11:33.036 ], 00:11:33.036 "driver_specific": { 00:11:33.036 "raid": { 00:11:33.036 "uuid": "517eef64-952b-4224-854f-b5e91519b4a5", 00:11:33.036 "strip_size_kb": 64, 00:11:33.036 "state": "online", 00:11:33.036 "raid_level": "concat", 00:11:33.036 "superblock": true, 00:11:33.036 "num_base_bdevs": 3, 00:11:33.036 "num_base_bdevs_discovered": 3, 00:11:33.036 "num_base_bdevs_operational": 3, 00:11:33.036 "base_bdevs_list": [ 00:11:33.036 { 00:11:33.036 "name": "pt1", 00:11:33.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.036 "is_configured": true, 00:11:33.036 "data_offset": 2048, 00:11:33.036 "data_size": 63488 00:11:33.036 }, 00:11:33.036 { 00:11:33.036 "name": "pt2", 00:11:33.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.036 "is_configured": true, 00:11:33.036 "data_offset": 2048, 00:11:33.036 "data_size": 63488 00:11:33.036 }, 00:11:33.036 { 00:11:33.036 "name": "pt3", 00:11:33.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.036 "is_configured": true, 00:11:33.036 "data_offset": 2048, 00:11:33.036 
"data_size": 63488 00:11:33.036 } 00:11:33.036 ] 00:11:33.036 } 00:11:33.036 } 00:11:33.036 }' 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:33.036 pt2 00:11:33.036 pt3' 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.036 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.294 [2024-11-15 10:40:03.725192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=517eef64-952b-4224-854f-b5e91519b4a5 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 517eef64-952b-4224-854f-b5e91519b4a5 ']' 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.294 [2024-11-15 10:40:03.776827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.294 [2024-11-15 10:40:03.776970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.294 [2024-11-15 10:40:03.777072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.294 [2024-11-15 10:40:03.777155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.294 [2024-11-15 10:40:03.777170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.294 10:40:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.295 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.553 10:40:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.553 [2024-11-15 10:40:03.932951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:33.553 [2024-11-15 10:40:03.935247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:11:33.553 [2024-11-15 10:40:03.935323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:33.553 [2024-11-15 10:40:03.935409] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:33.553 [2024-11-15 10:40:03.935486] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:33.553 [2024-11-15 10:40:03.935520] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:33.553 [2024-11-15 10:40:03.935547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.553 [2024-11-15 10:40:03.935560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:33.553 request: 00:11:33.553 { 00:11:33.553 "name": "raid_bdev1", 00:11:33.553 "raid_level": "concat", 00:11:33.553 "base_bdevs": [ 00:11:33.553 "malloc1", 00:11:33.553 "malloc2", 00:11:33.553 "malloc3" 00:11:33.553 ], 00:11:33.553 "strip_size_kb": 64, 00:11:33.553 "superblock": false, 00:11:33.553 "method": "bdev_raid_create", 00:11:33.553 "req_id": 1 00:11:33.553 } 00:11:33.553 Got JSON-RPC error response 00:11:33.553 response: 00:11:33.553 { 00:11:33.553 "code": -17, 00:11:33.553 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:33.553 } 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:33.553 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.554 [2024-11-15 10:40:03.992923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:33.554 [2024-11-15 10:40:03.993125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.554 [2024-11-15 10:40:03.993297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.554 [2024-11-15 10:40:03.993464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.554 [2024-11-15 10:40:03.996305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.554 [2024-11-15 10:40:03.996481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:33.554 [2024-11-15 10:40:03.996706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:33.554 [2024-11-15 10:40:03.996890] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:33.554 pt1 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.554 10:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.554 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.554 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.554 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.554 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.554 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.554 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.554 "name": "raid_bdev1", 
00:11:33.554 "uuid": "517eef64-952b-4224-854f-b5e91519b4a5", 00:11:33.554 "strip_size_kb": 64, 00:11:33.554 "state": "configuring", 00:11:33.554 "raid_level": "concat", 00:11:33.554 "superblock": true, 00:11:33.554 "num_base_bdevs": 3, 00:11:33.554 "num_base_bdevs_discovered": 1, 00:11:33.554 "num_base_bdevs_operational": 3, 00:11:33.554 "base_bdevs_list": [ 00:11:33.554 { 00:11:33.554 "name": "pt1", 00:11:33.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.554 "is_configured": true, 00:11:33.554 "data_offset": 2048, 00:11:33.554 "data_size": 63488 00:11:33.554 }, 00:11:33.554 { 00:11:33.554 "name": null, 00:11:33.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.554 "is_configured": false, 00:11:33.554 "data_offset": 2048, 00:11:33.554 "data_size": 63488 00:11:33.554 }, 00:11:33.554 { 00:11:33.554 "name": null, 00:11:33.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.554 "is_configured": false, 00:11:33.554 "data_offset": 2048, 00:11:33.554 "data_size": 63488 00:11:33.554 } 00:11:33.554 ] 00:11:33.554 }' 00:11:33.554 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.554 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.122 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:34.122 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:34.122 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.122 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.122 [2024-11-15 10:40:04.481381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:34.122 [2024-11-15 10:40:04.481602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.122 [2024-11-15 10:40:04.481653] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:34.122 [2024-11-15 10:40:04.481669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.122 [2024-11-15 10:40:04.482190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.122 [2024-11-15 10:40:04.482224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:34.122 [2024-11-15 10:40:04.482332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:34.123 [2024-11-15 10:40:04.482392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:34.123 pt2 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.123 [2024-11-15 10:40:04.489378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.123 "name": "raid_bdev1", 00:11:34.123 "uuid": "517eef64-952b-4224-854f-b5e91519b4a5", 00:11:34.123 "strip_size_kb": 64, 00:11:34.123 "state": "configuring", 00:11:34.123 "raid_level": "concat", 00:11:34.123 "superblock": true, 00:11:34.123 "num_base_bdevs": 3, 00:11:34.123 "num_base_bdevs_discovered": 1, 00:11:34.123 "num_base_bdevs_operational": 3, 00:11:34.123 "base_bdevs_list": [ 00:11:34.123 { 00:11:34.123 "name": "pt1", 00:11:34.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.123 "is_configured": true, 00:11:34.123 "data_offset": 2048, 00:11:34.123 "data_size": 63488 00:11:34.123 }, 00:11:34.123 { 00:11:34.123 "name": null, 00:11:34.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.123 "is_configured": false, 00:11:34.123 "data_offset": 0, 00:11:34.123 "data_size": 63488 00:11:34.123 }, 00:11:34.123 { 00:11:34.123 "name": null, 00:11:34.123 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.123 "is_configured": false, 00:11:34.123 "data_offset": 2048, 00:11:34.123 "data_size": 63488 00:11:34.123 } 00:11:34.123 ] 00:11:34.123 }' 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.123 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.692 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:34.692 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.692 10:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:34.692 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.692 10:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.692 [2024-11-15 10:40:04.997482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:34.692 [2024-11-15 10:40:04.997575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.692 [2024-11-15 10:40:04.997603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:34.692 [2024-11-15 10:40:04.997619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.692 [2024-11-15 10:40:04.998169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.692 [2024-11-15 10:40:04.998201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:34.692 [2024-11-15 10:40:04.998297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:34.692 [2024-11-15 10:40:04.998334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:34.692 pt2 00:11:34.692 10:40:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.692 [2024-11-15 10:40:05.005462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:34.692 [2024-11-15 10:40:05.005523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.692 [2024-11-15 10:40:05.005545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.692 [2024-11-15 10:40:05.005560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.692 [2024-11-15 10:40:05.005992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.692 [2024-11-15 10:40:05.006045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:34.692 [2024-11-15 10:40:05.006120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:34.692 [2024-11-15 10:40:05.006151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:34.692 [2024-11-15 10:40:05.006296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:34.692 [2024-11-15 10:40:05.006316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:34.692 [2024-11-15 10:40:05.006647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:34.692 [2024-11-15 10:40:05.006840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:34.692 [2024-11-15 10:40:05.006862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:34.692 [2024-11-15 10:40:05.007027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.692 pt3 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.692 10:40:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.692 "name": "raid_bdev1", 00:11:34.692 "uuid": "517eef64-952b-4224-854f-b5e91519b4a5", 00:11:34.692 "strip_size_kb": 64, 00:11:34.692 "state": "online", 00:11:34.692 "raid_level": "concat", 00:11:34.692 "superblock": true, 00:11:34.692 "num_base_bdevs": 3, 00:11:34.692 "num_base_bdevs_discovered": 3, 00:11:34.692 "num_base_bdevs_operational": 3, 00:11:34.692 "base_bdevs_list": [ 00:11:34.692 { 00:11:34.692 "name": "pt1", 00:11:34.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.692 "is_configured": true, 00:11:34.692 "data_offset": 2048, 00:11:34.692 "data_size": 63488 00:11:34.692 }, 00:11:34.692 { 00:11:34.692 "name": "pt2", 00:11:34.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.692 "is_configured": true, 00:11:34.692 "data_offset": 2048, 00:11:34.692 "data_size": 63488 00:11:34.692 }, 00:11:34.692 { 00:11:34.692 "name": "pt3", 00:11:34.692 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.692 "is_configured": true, 00:11:34.692 "data_offset": 2048, 00:11:34.692 "data_size": 63488 00:11:34.692 } 00:11:34.692 ] 00:11:34.692 }' 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.692 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.950 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:34.950 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:11:34.950 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.950 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.951 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.951 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.951 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.951 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.951 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.951 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.951 [2024-11-15 10:40:05.485998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.951 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.209 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.209 "name": "raid_bdev1", 00:11:35.209 "aliases": [ 00:11:35.209 "517eef64-952b-4224-854f-b5e91519b4a5" 00:11:35.209 ], 00:11:35.209 "product_name": "Raid Volume", 00:11:35.209 "block_size": 512, 00:11:35.209 "num_blocks": 190464, 00:11:35.209 "uuid": "517eef64-952b-4224-854f-b5e91519b4a5", 00:11:35.209 "assigned_rate_limits": { 00:11:35.209 "rw_ios_per_sec": 0, 00:11:35.209 "rw_mbytes_per_sec": 0, 00:11:35.209 "r_mbytes_per_sec": 0, 00:11:35.209 "w_mbytes_per_sec": 0 00:11:35.209 }, 00:11:35.209 "claimed": false, 00:11:35.209 "zoned": false, 00:11:35.209 "supported_io_types": { 00:11:35.209 "read": true, 00:11:35.209 "write": true, 00:11:35.209 "unmap": true, 00:11:35.209 "flush": true, 00:11:35.209 "reset": true, 00:11:35.209 "nvme_admin": false, 00:11:35.209 "nvme_io": false, 
00:11:35.209 "nvme_io_md": false, 00:11:35.209 "write_zeroes": true, 00:11:35.209 "zcopy": false, 00:11:35.209 "get_zone_info": false, 00:11:35.209 "zone_management": false, 00:11:35.209 "zone_append": false, 00:11:35.209 "compare": false, 00:11:35.209 "compare_and_write": false, 00:11:35.209 "abort": false, 00:11:35.209 "seek_hole": false, 00:11:35.209 "seek_data": false, 00:11:35.209 "copy": false, 00:11:35.209 "nvme_iov_md": false 00:11:35.209 }, 00:11:35.209 "memory_domains": [ 00:11:35.209 { 00:11:35.209 "dma_device_id": "system", 00:11:35.209 "dma_device_type": 1 00:11:35.209 }, 00:11:35.209 { 00:11:35.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.209 "dma_device_type": 2 00:11:35.209 }, 00:11:35.209 { 00:11:35.209 "dma_device_id": "system", 00:11:35.209 "dma_device_type": 1 00:11:35.209 }, 00:11:35.209 { 00:11:35.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.210 "dma_device_type": 2 00:11:35.210 }, 00:11:35.210 { 00:11:35.210 "dma_device_id": "system", 00:11:35.210 "dma_device_type": 1 00:11:35.210 }, 00:11:35.210 { 00:11:35.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.210 "dma_device_type": 2 00:11:35.210 } 00:11:35.210 ], 00:11:35.210 "driver_specific": { 00:11:35.210 "raid": { 00:11:35.210 "uuid": "517eef64-952b-4224-854f-b5e91519b4a5", 00:11:35.210 "strip_size_kb": 64, 00:11:35.210 "state": "online", 00:11:35.210 "raid_level": "concat", 00:11:35.210 "superblock": true, 00:11:35.210 "num_base_bdevs": 3, 00:11:35.210 "num_base_bdevs_discovered": 3, 00:11:35.210 "num_base_bdevs_operational": 3, 00:11:35.210 "base_bdevs_list": [ 00:11:35.210 { 00:11:35.210 "name": "pt1", 00:11:35.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.210 "is_configured": true, 00:11:35.210 "data_offset": 2048, 00:11:35.210 "data_size": 63488 00:11:35.210 }, 00:11:35.210 { 00:11:35.210 "name": "pt2", 00:11:35.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.210 "is_configured": true, 00:11:35.210 "data_offset": 2048, 00:11:35.210 
"data_size": 63488 00:11:35.210 }, 00:11:35.210 { 00:11:35.210 "name": "pt3", 00:11:35.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.210 "is_configured": true, 00:11:35.210 "data_offset": 2048, 00:11:35.210 "data_size": 63488 00:11:35.210 } 00:11:35.210 ] 00:11:35.210 } 00:11:35.210 } 00:11:35.210 }' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:35.210 pt2 00:11:35.210 pt3' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.210 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:35.468 [2024-11-15 10:40:05.794013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 517eef64-952b-4224-854f-b5e91519b4a5 '!=' 517eef64-952b-4224-854f-b5e91519b4a5 ']' 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67097 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67097 ']' 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67097 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67097 00:11:35.468 killing process with pid 67097 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67097' 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 67097 00:11:35.468 10:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 67097 00:11:35.468 
[2024-11-15 10:40:05.871887] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.468 [2024-11-15 10:40:05.872086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.468 [2024-11-15 10:40:05.872282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.468 [2024-11-15 10:40:05.872323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:35.727 [2024-11-15 10:40:06.149679] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.660 10:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:36.660 00:11:36.660 real 0m5.500s 00:11:36.660 user 0m8.343s 00:11:36.660 sys 0m0.729s 00:11:36.660 10:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:36.660 10:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.660 ************************************ 00:11:36.660 END TEST raid_superblock_test 00:11:36.660 ************************************ 00:11:36.660 10:40:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:36.660 10:40:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:36.660 10:40:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.660 10:40:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.660 ************************************ 00:11:36.660 START TEST raid_read_error_test 00:11:36.660 ************************************ 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:36.660 10:40:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.F713PALkNR 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67351 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67351 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67351 ']' 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:36.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:36.660 10:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.918 [2024-11-15 10:40:07.310926] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:11:36.918 [2024-11-15 10:40:07.311185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67351 ] 00:11:37.176 [2024-11-15 10:40:07.529149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.176 [2024-11-15 10:40:07.655387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.434 [2024-11-15 10:40:07.875295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.434 [2024-11-15 10:40:07.875390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 BaseBdev1_malloc 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 true 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 [2024-11-15 10:40:08.379789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:38.000 [2024-11-15 10:40:08.379860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.000 [2024-11-15 10:40:08.379891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:38.000 [2024-11-15 10:40:08.379910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.000 [2024-11-15 10:40:08.382546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.000 [2024-11-15 10:40:08.382600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:38.000 BaseBdev1 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 BaseBdev2_malloc 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 true 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 [2024-11-15 10:40:08.431594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:38.000 [2024-11-15 10:40:08.431666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.000 [2024-11-15 10:40:08.431693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:38.000 [2024-11-15 10:40:08.431711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.000 [2024-11-15 10:40:08.434293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.000 [2024-11-15 10:40:08.434366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:38.000 BaseBdev2 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 BaseBdev3_malloc 00:11:38.000 10:40:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 true 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.000 [2024-11-15 10:40:08.489940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:38.000 [2024-11-15 10:40:08.490010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.000 [2024-11-15 10:40:08.490038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:38.000 [2024-11-15 10:40:08.490056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.000 [2024-11-15 10:40:08.492754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.000 [2024-11-15 10:40:08.492814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:38.000 BaseBdev3 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.000 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.001 [2024-11-15 10:40:08.498051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.001 [2024-11-15 10:40:08.500300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.001 [2024-11-15 10:40:08.500444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.001 [2024-11-15 10:40:08.500722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:38.001 [2024-11-15 10:40:08.500752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:38.001 [2024-11-15 10:40:08.501064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:38.001 [2024-11-15 10:40:08.501294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:38.001 [2024-11-15 10:40:08.501336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:38.001 [2024-11-15 10:40:08.501554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.001 10:40:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.001 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.258 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.258 "name": "raid_bdev1", 00:11:38.258 "uuid": "718ae21c-c791-41ff-b47d-68a18e5fd3ab", 00:11:38.258 "strip_size_kb": 64, 00:11:38.258 "state": "online", 00:11:38.258 "raid_level": "concat", 00:11:38.258 "superblock": true, 00:11:38.258 "num_base_bdevs": 3, 00:11:38.258 "num_base_bdevs_discovered": 3, 00:11:38.258 "num_base_bdevs_operational": 3, 00:11:38.258 "base_bdevs_list": [ 00:11:38.258 { 00:11:38.258 "name": "BaseBdev1", 00:11:38.258 "uuid": "a925aae1-e646-53b6-a5c4-263cc227bd5c", 00:11:38.258 "is_configured": true, 00:11:38.258 "data_offset": 2048, 00:11:38.258 "data_size": 63488 00:11:38.258 }, 00:11:38.258 { 00:11:38.258 "name": "BaseBdev2", 00:11:38.258 "uuid": "5e84c3f3-1bf9-5342-90c6-d6c9e6a4259a", 00:11:38.258 "is_configured": true, 00:11:38.258 "data_offset": 2048, 00:11:38.258 "data_size": 63488 
00:11:38.258 }, 00:11:38.258 { 00:11:38.258 "name": "BaseBdev3", 00:11:38.258 "uuid": "ed6a8c2b-8c9f-5bbe-8086-7e6a895958e3", 00:11:38.258 "is_configured": true, 00:11:38.258 "data_offset": 2048, 00:11:38.258 "data_size": 63488 00:11:38.258 } 00:11:38.258 ] 00:11:38.258 }' 00:11:38.258 10:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.258 10:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.517 10:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:38.517 10:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:38.778 [2024-11-15 10:40:09.187512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.711 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.711 "name": "raid_bdev1", 00:11:39.711 "uuid": "718ae21c-c791-41ff-b47d-68a18e5fd3ab", 00:11:39.711 "strip_size_kb": 64, 00:11:39.711 "state": "online", 00:11:39.711 "raid_level": "concat", 00:11:39.711 "superblock": true, 00:11:39.711 "num_base_bdevs": 3, 00:11:39.711 "num_base_bdevs_discovered": 3, 00:11:39.711 "num_base_bdevs_operational": 3, 00:11:39.711 "base_bdevs_list": [ 00:11:39.711 { 00:11:39.711 "name": "BaseBdev1", 00:11:39.711 "uuid": "a925aae1-e646-53b6-a5c4-263cc227bd5c", 00:11:39.711 "is_configured": true, 00:11:39.711 "data_offset": 2048, 00:11:39.711 "data_size": 63488 
00:11:39.711 }, 00:11:39.711 { 00:11:39.711 "name": "BaseBdev2", 00:11:39.711 "uuid": "5e84c3f3-1bf9-5342-90c6-d6c9e6a4259a", 00:11:39.711 "is_configured": true, 00:11:39.711 "data_offset": 2048, 00:11:39.711 "data_size": 63488 00:11:39.711 }, 00:11:39.711 { 00:11:39.712 "name": "BaseBdev3", 00:11:39.712 "uuid": "ed6a8c2b-8c9f-5bbe-8086-7e6a895958e3", 00:11:39.712 "is_configured": true, 00:11:39.712 "data_offset": 2048, 00:11:39.712 "data_size": 63488 00:11:39.712 } 00:11:39.712 ] 00:11:39.712 }' 00:11:39.712 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.712 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.277 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:40.277 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.277 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.277 [2024-11-15 10:40:10.609910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.277 [2024-11-15 10:40:10.609950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.277 [2024-11-15 10:40:10.613534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.277 [2024-11-15 10:40:10.613599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.277 [2024-11-15 10:40:10.613654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.278 [2024-11-15 10:40:10.613670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:40.278 { 00:11:40.278 "results": [ 00:11:40.278 { 00:11:40.278 "job": "raid_bdev1", 00:11:40.278 "core_mask": "0x1", 00:11:40.278 "workload": "randrw", 00:11:40.278 "percentage": 50, 
00:11:40.278 "status": "finished", 00:11:40.278 "queue_depth": 1, 00:11:40.278 "io_size": 131072, 00:11:40.278 "runtime": 1.420317, 00:11:40.278 "iops": 11131.317867771771, 00:11:40.278 "mibps": 1391.4147334714714, 00:11:40.278 "io_failed": 1, 00:11:40.278 "io_timeout": 0, 00:11:40.278 "avg_latency_us": 123.35790709575038, 00:11:40.278 "min_latency_us": 42.82181818181818, 00:11:40.278 "max_latency_us": 1921.3963636363637 00:11:40.278 } 00:11:40.278 ], 00:11:40.278 "core_count": 1 00:11:40.278 } 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67351 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67351 ']' 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67351 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67351 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:40.278 killing process with pid 67351 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67351' 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67351 00:11:40.278 [2024-11-15 10:40:10.647270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.278 10:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67351 00:11:40.535 [2024-11-15 
10:40:10.839511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.F713PALkNR 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:41.467 00:11:41.467 real 0m4.706s 00:11:41.467 user 0m6.031s 00:11:41.467 sys 0m0.495s 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:41.467 10:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.467 ************************************ 00:11:41.467 END TEST raid_read_error_test 00:11:41.467 ************************************ 00:11:41.467 10:40:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:41.467 10:40:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:41.467 10:40:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:41.467 10:40:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.467 ************************************ 00:11:41.467 START TEST raid_write_error_test 00:11:41.467 ************************************ 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:11:41.467 10:40:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:41.467 10:40:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NXcplsAGSB 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67501 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67501 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67501 ']' 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:41.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.467 10:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:41.725 [2024-11-15 10:40:12.026790] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:11:41.725 [2024-11-15 10:40:12.026957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67501 ] 00:11:41.725 [2024-11-15 10:40:12.257406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.983 [2024-11-15 10:40:12.364112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.240 [2024-11-15 10:40:12.543654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.240 [2024-11-15 10:40:12.543711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.807 BaseBdev1_malloc 00:11:42.807 10:40:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.807 true 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.807 [2024-11-15 10:40:13.113558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:42.807 [2024-11-15 10:40:13.113646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.807 [2024-11-15 10:40:13.113680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:42.807 [2024-11-15 10:40:13.113699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.807 [2024-11-15 10:40:13.116823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.807 [2024-11-15 10:40:13.116895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:42.807 BaseBdev1 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.807 BaseBdev2_malloc 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.807 true 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.807 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.807 [2024-11-15 10:40:13.172256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:42.808 [2024-11-15 10:40:13.172328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.808 [2024-11-15 10:40:13.172369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:42.808 [2024-11-15 10:40:13.172391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.808 [2024-11-15 10:40:13.175003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.808 [2024-11-15 10:40:13.175073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:42.808 BaseBdev2 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.808 BaseBdev3_malloc 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.808 true 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.808 [2024-11-15 10:40:13.232595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:42.808 [2024-11-15 10:40:13.232666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.808 [2024-11-15 10:40:13.232695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:42.808 [2024-11-15 10:40:13.232720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.808 [2024-11-15 10:40:13.235422] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.808 [2024-11-15 10:40:13.235473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:42.808 BaseBdev3 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.808 [2024-11-15 10:40:13.240708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.808 [2024-11-15 10:40:13.242977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.808 [2024-11-15 10:40:13.243115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.808 [2024-11-15 10:40:13.243427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:42.808 [2024-11-15 10:40:13.243457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:42.808 [2024-11-15 10:40:13.243778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:42.808 [2024-11-15 10:40:13.244023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.808 [2024-11-15 10:40:13.244057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:42.808 [2024-11-15 10:40:13.244254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.808 
10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.808 "name": "raid_bdev1", 00:11:42.808 "uuid": "bd116c5a-ed9c-49e5-b086-f64967fa1a36", 00:11:42.808 "strip_size_kb": 64, 00:11:42.808 "state": "online", 00:11:42.808 "raid_level": "concat", 00:11:42.808 "superblock": true, 
00:11:42.808 "num_base_bdevs": 3, 00:11:42.808 "num_base_bdevs_discovered": 3, 00:11:42.808 "num_base_bdevs_operational": 3, 00:11:42.808 "base_bdevs_list": [ 00:11:42.808 { 00:11:42.808 "name": "BaseBdev1", 00:11:42.808 "uuid": "86e74bf8-7f07-5989-b8f3-5b58c57fb7aa", 00:11:42.808 "is_configured": true, 00:11:42.808 "data_offset": 2048, 00:11:42.808 "data_size": 63488 00:11:42.808 }, 00:11:42.808 { 00:11:42.808 "name": "BaseBdev2", 00:11:42.808 "uuid": "f3cc2ce7-679a-57ce-b078-720f8ca354b0", 00:11:42.808 "is_configured": true, 00:11:42.808 "data_offset": 2048, 00:11:42.808 "data_size": 63488 00:11:42.808 }, 00:11:42.808 { 00:11:42.808 "name": "BaseBdev3", 00:11:42.808 "uuid": "b61c60e4-9d47-5bf7-9b17-c287fd7f958f", 00:11:42.808 "is_configured": true, 00:11:42.808 "data_offset": 2048, 00:11:42.808 "data_size": 63488 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 }' 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.808 10:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.373 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:43.373 10:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:43.631 [2024-11-15 10:40:13.966177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:44.564 "name": "raid_bdev1", 00:11:44.564 "uuid": "bd116c5a-ed9c-49e5-b086-f64967fa1a36", 00:11:44.564 "strip_size_kb": 64, 00:11:44.564 "state": "online", 00:11:44.564 "raid_level": "concat", 00:11:44.564 "superblock": true, 00:11:44.564 "num_base_bdevs": 3, 00:11:44.564 "num_base_bdevs_discovered": 3, 00:11:44.564 "num_base_bdevs_operational": 3, 00:11:44.564 "base_bdevs_list": [ 00:11:44.564 { 00:11:44.564 "name": "BaseBdev1", 00:11:44.564 "uuid": "86e74bf8-7f07-5989-b8f3-5b58c57fb7aa", 00:11:44.564 "is_configured": true, 00:11:44.564 "data_offset": 2048, 00:11:44.564 "data_size": 63488 00:11:44.564 }, 00:11:44.564 { 00:11:44.564 "name": "BaseBdev2", 00:11:44.564 "uuid": "f3cc2ce7-679a-57ce-b078-720f8ca354b0", 00:11:44.564 "is_configured": true, 00:11:44.564 "data_offset": 2048, 00:11:44.564 "data_size": 63488 00:11:44.564 }, 00:11:44.564 { 00:11:44.564 "name": "BaseBdev3", 00:11:44.564 "uuid": "b61c60e4-9d47-5bf7-9b17-c287fd7f958f", 00:11:44.564 "is_configured": true, 00:11:44.564 "data_offset": 2048, 00:11:44.564 "data_size": 63488 00:11:44.564 } 00:11:44.564 ] 00:11:44.564 }' 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.564 10:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.822 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:44.823 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.823 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.823 [2024-11-15 10:40:15.368592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:44.823 [2024-11-15 10:40:15.368632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.823 [2024-11-15 10:40:15.372191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:11:44.823 [2024-11-15 10:40:15.372264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.823 [2024-11-15 10:40:15.372319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.823 [2024-11-15 10:40:15.372334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:44.823 { 00:11:44.823 "results": [ 00:11:44.823 { 00:11:44.823 "job": "raid_bdev1", 00:11:44.823 "core_mask": "0x1", 00:11:44.823 "workload": "randrw", 00:11:44.823 "percentage": 50, 00:11:44.823 "status": "finished", 00:11:44.823 "queue_depth": 1, 00:11:44.823 "io_size": 131072, 00:11:44.823 "runtime": 1.400137, 00:11:44.823 "iops": 11103.199186936707, 00:11:44.823 "mibps": 1387.8998983670883, 00:11:44.823 "io_failed": 1, 00:11:44.823 "io_timeout": 0, 00:11:44.823 "avg_latency_us": 123.1213420887982, 00:11:44.823 "min_latency_us": 42.589090909090906, 00:11:44.823 "max_latency_us": 1899.0545454545454 00:11:44.823 } 00:11:44.823 ], 00:11:44.823 "core_count": 1 00:11:44.823 } 00:11:44.823 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.823 10:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67501 00:11:44.823 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67501 ']' 00:11:44.823 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67501 00:11:44.823 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:44.823 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:44.823 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67501 00:11:45.081 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:11:45.081 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:45.081 killing process with pid 67501 00:11:45.081 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67501' 00:11:45.081 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67501 00:11:45.081 [2024-11-15 10:40:15.401618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:45.081 10:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67501 00:11:45.081 [2024-11-15 10:40:15.599751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NXcplsAGSB 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:46.455 00:11:46.455 real 0m4.779s 00:11:46.455 user 0m6.161s 00:11:46.455 sys 0m0.451s 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:46.455 10:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.455 ************************************ 00:11:46.455 END TEST raid_write_error_test 00:11:46.455 ************************************ 
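The `bdev_raid.sh@845`/`@849` steps above extract a failures-per-second figure from the bdevperf output file and assert it is non-zero. A minimal stand-alone sketch of that pipeline, using a fabricated result block (the real file path `/raidtest/tmp.NXcplsAGSB` and its exact column layout are not reproduced here; only the grep/awk filtering mirrors the log):

```shell
# Sketch of the write-error check from bdev_raid.sh@845/@849 in the log above.
# The perf output below is made up; column 6 stands in for failures/sec.
result='Job: raid_bdev1 (Core Mask 0x1)
raid_bdev1 1 11103.20 1387.90 123.12 0.71'

# Same filters as the log: drop the "Job" header line, keep the raid_bdev1
# data row, and print column 6 with awk.
fail_per_s=$(echo "$result" | grep -v Job | grep raid_bdev1 | awk '{print $6}')

# The test only requires that some writes failed, i.e. the rate is not 0.00.
[ "$fail_per_s" != "0.00" ] && echo "write errors observed: $fail_per_s/s"
```

This mirrors why the log compares `0.71 != 0.00`: the write-error test injects one failing I/O, so a zero failure rate would mean the error path was never exercised.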
00:11:46.455 10:40:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:46.455 10:40:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:46.455 10:40:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:46.455 10:40:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:46.455 10:40:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.455 ************************************ 00:11:46.455 START TEST raid_state_function_test 00:11:46.455 ************************************ 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67645 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:46.455 Process raid pid: 67645 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67645' 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67645 
00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67645 ']' 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:46.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:46.455 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.455 [2024-11-15 10:40:16.869736] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:11:46.455 [2024-11-15 10:40:16.869927] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.713 [2024-11-15 10:40:17.049316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.713 [2024-11-15 10:40:17.153807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.970 [2024-11-15 10:40:17.339093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.970 [2024-11-15 10:40:17.339170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.647 [2024-11-15 10:40:17.883729] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.647 [2024-11-15 10:40:17.883817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.647 [2024-11-15 10:40:17.883847] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.647 [2024-11-15 10:40:17.883879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.647 [2024-11-15 10:40:17.883909] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:47.647 [2024-11-15 10:40:17.883939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.647 10:40:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.647 "name": "Existed_Raid", 00:11:47.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.647 "strip_size_kb": 0, 00:11:47.647 "state": "configuring", 00:11:47.647 "raid_level": "raid1", 00:11:47.647 "superblock": false, 00:11:47.647 "num_base_bdevs": 3, 00:11:47.647 "num_base_bdevs_discovered": 0, 00:11:47.647 "num_base_bdevs_operational": 3, 00:11:47.647 "base_bdevs_list": [ 00:11:47.647 { 00:11:47.647 "name": "BaseBdev1", 00:11:47.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.647 "is_configured": false, 00:11:47.647 "data_offset": 0, 00:11:47.647 "data_size": 0 00:11:47.647 }, 00:11:47.647 { 00:11:47.647 "name": "BaseBdev2", 00:11:47.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.647 "is_configured": false, 00:11:47.647 "data_offset": 0, 00:11:47.647 "data_size": 0 00:11:47.647 }, 00:11:47.647 { 00:11:47.647 "name": "BaseBdev3", 00:11:47.647 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:47.647 "is_configured": false, 00:11:47.647 "data_offset": 0, 00:11:47.647 "data_size": 0 00:11:47.647 } 00:11:47.647 ] 00:11:47.647 }' 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.647 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.906 [2024-11-15 10:40:18.403811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:47.906 [2024-11-15 10:40:18.403868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.906 [2024-11-15 10:40:18.411795] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.906 [2024-11-15 10:40:18.411870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.906 [2024-11-15 10:40:18.411898] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.906 [2024-11-15 10:40:18.411929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:11:47.906 [2024-11-15 10:40:18.411947] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:47.906 [2024-11-15 10:40:18.411976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.906 [2024-11-15 10:40:18.452333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.906 BaseBdev1 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.906 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:48.164 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.164 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:48.164 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.164 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.164 [ 00:11:48.164 { 00:11:48.164 "name": "BaseBdev1", 00:11:48.164 "aliases": [ 00:11:48.164 "1a253111-aa77-46db-865a-3cd7b097d328" 00:11:48.164 ], 00:11:48.164 "product_name": "Malloc disk", 00:11:48.164 "block_size": 512, 00:11:48.164 "num_blocks": 65536, 00:11:48.164 "uuid": "1a253111-aa77-46db-865a-3cd7b097d328", 00:11:48.164 "assigned_rate_limits": { 00:11:48.164 "rw_ios_per_sec": 0, 00:11:48.164 "rw_mbytes_per_sec": 0, 00:11:48.164 "r_mbytes_per_sec": 0, 00:11:48.164 "w_mbytes_per_sec": 0 00:11:48.164 }, 00:11:48.164 "claimed": true, 00:11:48.164 "claim_type": "exclusive_write", 00:11:48.164 "zoned": false, 00:11:48.164 "supported_io_types": { 00:11:48.164 "read": true, 00:11:48.164 "write": true, 00:11:48.165 "unmap": true, 00:11:48.165 "flush": true, 00:11:48.165 "reset": true, 00:11:48.165 "nvme_admin": false, 00:11:48.165 "nvme_io": false, 00:11:48.165 "nvme_io_md": false, 00:11:48.165 "write_zeroes": true, 00:11:48.165 "zcopy": true, 00:11:48.165 "get_zone_info": false, 00:11:48.165 "zone_management": false, 00:11:48.165 "zone_append": false, 00:11:48.165 "compare": false, 00:11:48.165 "compare_and_write": false, 00:11:48.165 "abort": true, 00:11:48.165 "seek_hole": false, 00:11:48.165 "seek_data": false, 00:11:48.165 "copy": true, 00:11:48.165 "nvme_iov_md": false 00:11:48.165 }, 00:11:48.165 "memory_domains": [ 00:11:48.165 { 00:11:48.165 "dma_device_id": "system", 00:11:48.165 "dma_device_type": 1 00:11:48.165 }, 00:11:48.165 { 00:11:48.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:48.165 "dma_device_type": 2 00:11:48.165 } 00:11:48.165 ], 00:11:48.165 "driver_specific": {} 00:11:48.165 } 00:11:48.165 ] 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.165 10:40:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.165 "name": "Existed_Raid", 00:11:48.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.165 "strip_size_kb": 0, 00:11:48.165 "state": "configuring", 00:11:48.165 "raid_level": "raid1", 00:11:48.165 "superblock": false, 00:11:48.165 "num_base_bdevs": 3, 00:11:48.165 "num_base_bdevs_discovered": 1, 00:11:48.165 "num_base_bdevs_operational": 3, 00:11:48.165 "base_bdevs_list": [ 00:11:48.165 { 00:11:48.165 "name": "BaseBdev1", 00:11:48.165 "uuid": "1a253111-aa77-46db-865a-3cd7b097d328", 00:11:48.165 "is_configured": true, 00:11:48.165 "data_offset": 0, 00:11:48.165 "data_size": 65536 00:11:48.165 }, 00:11:48.165 { 00:11:48.165 "name": "BaseBdev2", 00:11:48.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.165 "is_configured": false, 00:11:48.165 "data_offset": 0, 00:11:48.165 "data_size": 0 00:11:48.165 }, 00:11:48.165 { 00:11:48.165 "name": "BaseBdev3", 00:11:48.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.165 "is_configured": false, 00:11:48.165 "data_offset": 0, 00:11:48.165 "data_size": 0 00:11:48.165 } 00:11:48.165 ] 00:11:48.165 }' 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.165 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.733 [2024-11-15 10:40:19.016571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:48.733 [2024-11-15 10:40:19.016652] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.733 [2024-11-15 10:40:19.024600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.733 [2024-11-15 10:40:19.026943] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.733 [2024-11-15 10:40:19.027015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.733 [2024-11-15 10:40:19.027055] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:48.733 [2024-11-15 10:40:19.027088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.733 10:40:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.733 "name": "Existed_Raid", 00:11:48.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.733 "strip_size_kb": 0, 00:11:48.733 "state": "configuring", 00:11:48.733 "raid_level": "raid1", 00:11:48.733 "superblock": false, 00:11:48.733 "num_base_bdevs": 3, 00:11:48.733 "num_base_bdevs_discovered": 1, 00:11:48.733 "num_base_bdevs_operational": 3, 00:11:48.733 "base_bdevs_list": [ 00:11:48.733 { 00:11:48.733 "name": "BaseBdev1", 00:11:48.733 "uuid": "1a253111-aa77-46db-865a-3cd7b097d328", 00:11:48.733 "is_configured": true, 00:11:48.733 "data_offset": 0, 
00:11:48.733 "data_size": 65536 00:11:48.733 }, 00:11:48.733 { 00:11:48.733 "name": "BaseBdev2", 00:11:48.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.733 "is_configured": false, 00:11:48.733 "data_offset": 0, 00:11:48.733 "data_size": 0 00:11:48.733 }, 00:11:48.733 { 00:11:48.733 "name": "BaseBdev3", 00:11:48.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.733 "is_configured": false, 00:11:48.733 "data_offset": 0, 00:11:48.733 "data_size": 0 00:11:48.733 } 00:11:48.733 ] 00:11:48.733 }' 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.733 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.992 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:48.992 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.992 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.250 [2024-11-15 10:40:19.551044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.250 BaseBdev2 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.250 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.251 [ 00:11:49.251 { 00:11:49.251 "name": "BaseBdev2", 00:11:49.251 "aliases": [ 00:11:49.251 "00541e87-a638-4b29-92ac-f543b4381559" 00:11:49.251 ], 00:11:49.251 "product_name": "Malloc disk", 00:11:49.251 "block_size": 512, 00:11:49.251 "num_blocks": 65536, 00:11:49.251 "uuid": "00541e87-a638-4b29-92ac-f543b4381559", 00:11:49.251 "assigned_rate_limits": { 00:11:49.251 "rw_ios_per_sec": 0, 00:11:49.251 "rw_mbytes_per_sec": 0, 00:11:49.251 "r_mbytes_per_sec": 0, 00:11:49.251 "w_mbytes_per_sec": 0 00:11:49.251 }, 00:11:49.251 "claimed": true, 00:11:49.251 "claim_type": "exclusive_write", 00:11:49.251 "zoned": false, 00:11:49.251 "supported_io_types": { 00:11:49.251 "read": true, 00:11:49.251 "write": true, 00:11:49.251 "unmap": true, 00:11:49.251 "flush": true, 00:11:49.251 "reset": true, 00:11:49.251 "nvme_admin": false, 00:11:49.251 "nvme_io": false, 00:11:49.251 "nvme_io_md": false, 00:11:49.251 "write_zeroes": true, 00:11:49.251 "zcopy": true, 00:11:49.251 "get_zone_info": false, 00:11:49.251 "zone_management": false, 00:11:49.251 "zone_append": false, 00:11:49.251 "compare": false, 00:11:49.251 "compare_and_write": false, 00:11:49.251 "abort": true, 00:11:49.251 "seek_hole": 
false, 00:11:49.251 "seek_data": false, 00:11:49.251 "copy": true, 00:11:49.251 "nvme_iov_md": false 00:11:49.251 }, 00:11:49.251 "memory_domains": [ 00:11:49.251 { 00:11:49.251 "dma_device_id": "system", 00:11:49.251 "dma_device_type": 1 00:11:49.251 }, 00:11:49.251 { 00:11:49.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.251 "dma_device_type": 2 00:11:49.251 } 00:11:49.251 ], 00:11:49.251 "driver_specific": {} 00:11:49.251 } 00:11:49.251 ] 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.251 "name": "Existed_Raid", 00:11:49.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.251 "strip_size_kb": 0, 00:11:49.251 "state": "configuring", 00:11:49.251 "raid_level": "raid1", 00:11:49.251 "superblock": false, 00:11:49.251 "num_base_bdevs": 3, 00:11:49.251 "num_base_bdevs_discovered": 2, 00:11:49.251 "num_base_bdevs_operational": 3, 00:11:49.251 "base_bdevs_list": [ 00:11:49.251 { 00:11:49.251 "name": "BaseBdev1", 00:11:49.251 "uuid": "1a253111-aa77-46db-865a-3cd7b097d328", 00:11:49.251 "is_configured": true, 00:11:49.251 "data_offset": 0, 00:11:49.251 "data_size": 65536 00:11:49.251 }, 00:11:49.251 { 00:11:49.251 "name": "BaseBdev2", 00:11:49.251 "uuid": "00541e87-a638-4b29-92ac-f543b4381559", 00:11:49.251 "is_configured": true, 00:11:49.251 "data_offset": 0, 00:11:49.251 "data_size": 65536 00:11:49.251 }, 00:11:49.251 { 00:11:49.251 "name": "BaseBdev3", 00:11:49.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.251 "is_configured": false, 00:11:49.251 "data_offset": 0, 00:11:49.251 "data_size": 0 00:11:49.251 } 00:11:49.251 ] 00:11:49.251 }' 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.251 10:40:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.535 [2024-11-15 10:40:20.070060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.535 [2024-11-15 10:40:20.070139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:49.535 [2024-11-15 10:40:20.070162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:49.535 [2024-11-15 10:40:20.070519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:49.535 [2024-11-15 10:40:20.070766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:49.535 [2024-11-15 10:40:20.070794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:49.535 [2024-11-15 10:40:20.071114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.535 BaseBdev3 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:49.535 10:40:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.535 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.535 [ 00:11:49.535 { 00:11:49.535 "name": "BaseBdev3", 00:11:49.535 "aliases": [ 00:11:49.794 "6e72bc30-976d-470b-b2ba-6fb61696a176" 00:11:49.794 ], 00:11:49.794 "product_name": "Malloc disk", 00:11:49.794 "block_size": 512, 00:11:49.794 "num_blocks": 65536, 00:11:49.794 "uuid": "6e72bc30-976d-470b-b2ba-6fb61696a176", 00:11:49.794 "assigned_rate_limits": { 00:11:49.794 "rw_ios_per_sec": 0, 00:11:49.794 "rw_mbytes_per_sec": 0, 00:11:49.794 "r_mbytes_per_sec": 0, 00:11:49.794 "w_mbytes_per_sec": 0 00:11:49.794 }, 00:11:49.794 "claimed": true, 00:11:49.794 "claim_type": "exclusive_write", 00:11:49.794 "zoned": false, 00:11:49.794 "supported_io_types": { 00:11:49.794 "read": true, 00:11:49.794 "write": true, 00:11:49.794 "unmap": true, 00:11:49.794 "flush": true, 00:11:49.794 "reset": true, 00:11:49.794 "nvme_admin": false, 00:11:49.794 "nvme_io": false, 00:11:49.794 "nvme_io_md": false, 00:11:49.794 "write_zeroes": true, 00:11:49.794 "zcopy": true, 00:11:49.794 "get_zone_info": false, 00:11:49.794 "zone_management": false, 00:11:49.794 "zone_append": false, 00:11:49.794 "compare": false, 
00:11:49.794 "compare_and_write": false, 00:11:49.794 "abort": true, 00:11:49.794 "seek_hole": false, 00:11:49.794 "seek_data": false, 00:11:49.794 "copy": true, 00:11:49.794 "nvme_iov_md": false 00:11:49.794 }, 00:11:49.794 "memory_domains": [ 00:11:49.794 { 00:11:49.794 "dma_device_id": "system", 00:11:49.794 "dma_device_type": 1 00:11:49.794 }, 00:11:49.794 { 00:11:49.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.794 "dma_device_type": 2 00:11:49.794 } 00:11:49.794 ], 00:11:49.794 "driver_specific": {} 00:11:49.794 } 00:11:49.794 ] 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.794 "name": "Existed_Raid", 00:11:49.794 "uuid": "d3ee1e8f-86cb-406d-a865-da6bd08ecdda", 00:11:49.794 "strip_size_kb": 0, 00:11:49.794 "state": "online", 00:11:49.794 "raid_level": "raid1", 00:11:49.794 "superblock": false, 00:11:49.794 "num_base_bdevs": 3, 00:11:49.794 "num_base_bdevs_discovered": 3, 00:11:49.794 "num_base_bdevs_operational": 3, 00:11:49.794 "base_bdevs_list": [ 00:11:49.794 { 00:11:49.794 "name": "BaseBdev1", 00:11:49.794 "uuid": "1a253111-aa77-46db-865a-3cd7b097d328", 00:11:49.794 "is_configured": true, 00:11:49.794 "data_offset": 0, 00:11:49.794 "data_size": 65536 00:11:49.794 }, 00:11:49.794 { 00:11:49.794 "name": "BaseBdev2", 00:11:49.794 "uuid": "00541e87-a638-4b29-92ac-f543b4381559", 00:11:49.794 "is_configured": true, 00:11:49.794 "data_offset": 0, 00:11:49.794 "data_size": 65536 00:11:49.794 }, 00:11:49.794 { 00:11:49.794 "name": "BaseBdev3", 00:11:49.794 "uuid": "6e72bc30-976d-470b-b2ba-6fb61696a176", 00:11:49.794 "is_configured": true, 00:11:49.794 "data_offset": 0, 00:11:49.794 "data_size": 65536 00:11:49.794 } 00:11:49.794 ] 00:11:49.794 }' 00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:49.794 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.052 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.052 [2024-11-15 10:40:20.606627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.311 "name": "Existed_Raid", 00:11:50.311 "aliases": [ 00:11:50.311 "d3ee1e8f-86cb-406d-a865-da6bd08ecdda" 00:11:50.311 ], 00:11:50.311 "product_name": "Raid Volume", 00:11:50.311 "block_size": 512, 00:11:50.311 "num_blocks": 65536, 00:11:50.311 "uuid": "d3ee1e8f-86cb-406d-a865-da6bd08ecdda", 00:11:50.311 "assigned_rate_limits": { 00:11:50.311 "rw_ios_per_sec": 0, 00:11:50.311 "rw_mbytes_per_sec": 0, 00:11:50.311 "r_mbytes_per_sec": 
0, 00:11:50.311 "w_mbytes_per_sec": 0 00:11:50.311 }, 00:11:50.311 "claimed": false, 00:11:50.311 "zoned": false, 00:11:50.311 "supported_io_types": { 00:11:50.311 "read": true, 00:11:50.311 "write": true, 00:11:50.311 "unmap": false, 00:11:50.311 "flush": false, 00:11:50.311 "reset": true, 00:11:50.311 "nvme_admin": false, 00:11:50.311 "nvme_io": false, 00:11:50.311 "nvme_io_md": false, 00:11:50.311 "write_zeroes": true, 00:11:50.311 "zcopy": false, 00:11:50.311 "get_zone_info": false, 00:11:50.311 "zone_management": false, 00:11:50.311 "zone_append": false, 00:11:50.311 "compare": false, 00:11:50.311 "compare_and_write": false, 00:11:50.311 "abort": false, 00:11:50.311 "seek_hole": false, 00:11:50.311 "seek_data": false, 00:11:50.311 "copy": false, 00:11:50.311 "nvme_iov_md": false 00:11:50.311 }, 00:11:50.311 "memory_domains": [ 00:11:50.311 { 00:11:50.311 "dma_device_id": "system", 00:11:50.311 "dma_device_type": 1 00:11:50.311 }, 00:11:50.311 { 00:11:50.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.311 "dma_device_type": 2 00:11:50.311 }, 00:11:50.311 { 00:11:50.311 "dma_device_id": "system", 00:11:50.311 "dma_device_type": 1 00:11:50.311 }, 00:11:50.311 { 00:11:50.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.311 "dma_device_type": 2 00:11:50.311 }, 00:11:50.311 { 00:11:50.311 "dma_device_id": "system", 00:11:50.311 "dma_device_type": 1 00:11:50.311 }, 00:11:50.311 { 00:11:50.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.311 "dma_device_type": 2 00:11:50.311 } 00:11:50.311 ], 00:11:50.311 "driver_specific": { 00:11:50.311 "raid": { 00:11:50.311 "uuid": "d3ee1e8f-86cb-406d-a865-da6bd08ecdda", 00:11:50.311 "strip_size_kb": 0, 00:11:50.311 "state": "online", 00:11:50.311 "raid_level": "raid1", 00:11:50.311 "superblock": false, 00:11:50.311 "num_base_bdevs": 3, 00:11:50.311 "num_base_bdevs_discovered": 3, 00:11:50.311 "num_base_bdevs_operational": 3, 00:11:50.311 "base_bdevs_list": [ 00:11:50.311 { 00:11:50.311 "name": "BaseBdev1", 
00:11:50.311 "uuid": "1a253111-aa77-46db-865a-3cd7b097d328", 00:11:50.311 "is_configured": true, 00:11:50.311 "data_offset": 0, 00:11:50.311 "data_size": 65536 00:11:50.311 }, 00:11:50.311 { 00:11:50.311 "name": "BaseBdev2", 00:11:50.311 "uuid": "00541e87-a638-4b29-92ac-f543b4381559", 00:11:50.311 "is_configured": true, 00:11:50.311 "data_offset": 0, 00:11:50.311 "data_size": 65536 00:11:50.311 }, 00:11:50.311 { 00:11:50.311 "name": "BaseBdev3", 00:11:50.311 "uuid": "6e72bc30-976d-470b-b2ba-6fb61696a176", 00:11:50.311 "is_configured": true, 00:11:50.311 "data_offset": 0, 00:11:50.311 "data_size": 65536 00:11:50.311 } 00:11:50.311 ] 00:11:50.311 } 00:11:50.311 } 00:11:50.311 }' 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:50.311 BaseBdev2 00:11:50.311 BaseBdev3' 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.311 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:50.312 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.312 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.312 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.312 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.570 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.570 [2024-11-15 10:40:20.934337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.571 "name": "Existed_Raid", 00:11:50.571 "uuid": "d3ee1e8f-86cb-406d-a865-da6bd08ecdda", 00:11:50.571 "strip_size_kb": 0, 00:11:50.571 "state": "online", 00:11:50.571 "raid_level": "raid1", 00:11:50.571 "superblock": false, 00:11:50.571 "num_base_bdevs": 3, 00:11:50.571 "num_base_bdevs_discovered": 2, 00:11:50.571 "num_base_bdevs_operational": 2, 00:11:50.571 "base_bdevs_list": [ 00:11:50.571 { 00:11:50.571 "name": null, 00:11:50.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.571 "is_configured": false, 00:11:50.571 "data_offset": 0, 00:11:50.571 "data_size": 65536 00:11:50.571 }, 00:11:50.571 { 00:11:50.571 "name": "BaseBdev2", 00:11:50.571 "uuid": "00541e87-a638-4b29-92ac-f543b4381559", 00:11:50.571 "is_configured": true, 00:11:50.571 "data_offset": 0, 00:11:50.571 "data_size": 65536 00:11:50.571 }, 00:11:50.571 { 00:11:50.571 "name": "BaseBdev3", 00:11:50.571 "uuid": "6e72bc30-976d-470b-b2ba-6fb61696a176", 00:11:50.571 "is_configured": true, 
00:11:50.571 "data_offset": 0, 00:11:50.571 "data_size": 65536 00:11:50.571 } 00:11:50.571 ] 00:11:50.571 }' 00:11:50.571 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.571 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.138 [2024-11-15 10:40:21.513788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.138 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:51.138 10:40:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.139 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.139 [2024-11-15 10:40:21.649430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:51.139 [2024-11-15 10:40:21.649615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.398 [2024-11-15 10:40:21.730384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.398 [2024-11-15 10:40:21.730448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.398 [2024-11-15 10:40:21.730468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.398 
10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.398 BaseBdev2 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:51.398 10:40:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.398 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.398 [ 00:11:51.398 { 00:11:51.398 "name": "BaseBdev2", 00:11:51.398 "aliases": [ 00:11:51.398 "0713e581-cf84-48d0-a61c-e909e1d507d2" 00:11:51.398 ], 00:11:51.398 "product_name": "Malloc disk", 00:11:51.398 "block_size": 512, 00:11:51.398 "num_blocks": 65536, 00:11:51.398 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:51.398 "assigned_rate_limits": { 00:11:51.398 "rw_ios_per_sec": 0, 00:11:51.398 "rw_mbytes_per_sec": 0, 00:11:51.398 "r_mbytes_per_sec": 0, 00:11:51.398 "w_mbytes_per_sec": 0 00:11:51.398 }, 00:11:51.398 "claimed": false, 00:11:51.398 "zoned": false, 00:11:51.398 "supported_io_types": { 00:11:51.398 "read": true, 00:11:51.398 "write": true, 00:11:51.398 "unmap": true, 00:11:51.398 "flush": true, 00:11:51.398 "reset": true, 00:11:51.398 "nvme_admin": 
false, 00:11:51.398 "nvme_io": false, 00:11:51.398 "nvme_io_md": false, 00:11:51.398 "write_zeroes": true, 00:11:51.398 "zcopy": true, 00:11:51.398 "get_zone_info": false, 00:11:51.398 "zone_management": false, 00:11:51.398 "zone_append": false, 00:11:51.398 "compare": false, 00:11:51.398 "compare_and_write": false, 00:11:51.398 "abort": true, 00:11:51.398 "seek_hole": false, 00:11:51.398 "seek_data": false, 00:11:51.398 "copy": true, 00:11:51.398 "nvme_iov_md": false 00:11:51.398 }, 00:11:51.398 "memory_domains": [ 00:11:51.398 { 00:11:51.398 "dma_device_id": "system", 00:11:51.398 "dma_device_type": 1 00:11:51.399 }, 00:11:51.399 { 00:11:51.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.399 "dma_device_type": 2 00:11:51.399 } 00:11:51.399 ], 00:11:51.399 "driver_specific": {} 00:11:51.399 } 00:11:51.399 ] 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.399 BaseBdev3 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:51.399 10:40:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.399 [ 00:11:51.399 { 00:11:51.399 "name": "BaseBdev3", 00:11:51.399 "aliases": [ 00:11:51.399 "3bd502bc-fa61-40c7-9455-f8922129a3ba" 00:11:51.399 ], 00:11:51.399 "product_name": "Malloc disk", 00:11:51.399 "block_size": 512, 00:11:51.399 "num_blocks": 65536, 00:11:51.399 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:51.399 "assigned_rate_limits": { 00:11:51.399 "rw_ios_per_sec": 0, 00:11:51.399 "rw_mbytes_per_sec": 0, 00:11:51.399 "r_mbytes_per_sec": 0, 00:11:51.399 "w_mbytes_per_sec": 0 00:11:51.399 }, 00:11:51.399 "claimed": false, 00:11:51.399 "zoned": false, 00:11:51.399 "supported_io_types": { 00:11:51.399 "read": true, 00:11:51.399 "write": true, 00:11:51.399 "unmap": true, 00:11:51.399 "flush": true, 00:11:51.399 "reset": true, 00:11:51.399 "nvme_admin": 
false, 00:11:51.399 "nvme_io": false, 00:11:51.399 "nvme_io_md": false, 00:11:51.399 "write_zeroes": true, 00:11:51.399 "zcopy": true, 00:11:51.399 "get_zone_info": false, 00:11:51.399 "zone_management": false, 00:11:51.399 "zone_append": false, 00:11:51.399 "compare": false, 00:11:51.399 "compare_and_write": false, 00:11:51.399 "abort": true, 00:11:51.399 "seek_hole": false, 00:11:51.399 "seek_data": false, 00:11:51.399 "copy": true, 00:11:51.399 "nvme_iov_md": false 00:11:51.399 }, 00:11:51.399 "memory_domains": [ 00:11:51.399 { 00:11:51.399 "dma_device_id": "system", 00:11:51.399 "dma_device_type": 1 00:11:51.399 }, 00:11:51.399 { 00:11:51.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.399 "dma_device_type": 2 00:11:51.399 } 00:11:51.399 ], 00:11:51.399 "driver_specific": {} 00:11:51.399 } 00:11:51.399 ] 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.399 [2024-11-15 10:40:21.924722] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.399 [2024-11-15 10:40:21.924784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.399 [2024-11-15 10:40:21.924811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:11:51.399 [2024-11-15 10:40:21.926985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.399 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.657 
10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.657 "name": "Existed_Raid", 00:11:51.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.657 "strip_size_kb": 0, 00:11:51.657 "state": "configuring", 00:11:51.657 "raid_level": "raid1", 00:11:51.657 "superblock": false, 00:11:51.657 "num_base_bdevs": 3, 00:11:51.657 "num_base_bdevs_discovered": 2, 00:11:51.657 "num_base_bdevs_operational": 3, 00:11:51.657 "base_bdevs_list": [ 00:11:51.657 { 00:11:51.657 "name": "BaseBdev1", 00:11:51.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.657 "is_configured": false, 00:11:51.657 "data_offset": 0, 00:11:51.657 "data_size": 0 00:11:51.658 }, 00:11:51.658 { 00:11:51.658 "name": "BaseBdev2", 00:11:51.658 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:51.658 "is_configured": true, 00:11:51.658 "data_offset": 0, 00:11:51.658 "data_size": 65536 00:11:51.658 }, 00:11:51.658 { 00:11:51.658 "name": "BaseBdev3", 00:11:51.658 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:51.658 "is_configured": true, 00:11:51.658 "data_offset": 0, 00:11:51.658 "data_size": 65536 00:11:51.658 } 00:11:51.658 ] 00:11:51.658 }' 00:11:51.658 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.658 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.917 [2024-11-15 10:40:22.428909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.917 10:40:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.917 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.175 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.175 "name": "Existed_Raid", 00:11:52.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.175 "strip_size_kb": 0, 00:11:52.175 "state": "configuring", 00:11:52.175 
"raid_level": "raid1", 00:11:52.175 "superblock": false, 00:11:52.175 "num_base_bdevs": 3, 00:11:52.175 "num_base_bdevs_discovered": 1, 00:11:52.175 "num_base_bdevs_operational": 3, 00:11:52.175 "base_bdevs_list": [ 00:11:52.175 { 00:11:52.175 "name": "BaseBdev1", 00:11:52.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.175 "is_configured": false, 00:11:52.175 "data_offset": 0, 00:11:52.175 "data_size": 0 00:11:52.175 }, 00:11:52.175 { 00:11:52.175 "name": null, 00:11:52.175 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:52.175 "is_configured": false, 00:11:52.175 "data_offset": 0, 00:11:52.175 "data_size": 65536 00:11:52.175 }, 00:11:52.175 { 00:11:52.175 "name": "BaseBdev3", 00:11:52.175 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:52.175 "is_configured": true, 00:11:52.175 "data_offset": 0, 00:11:52.175 "data_size": 65536 00:11:52.175 } 00:11:52.175 ] 00:11:52.175 }' 00:11:52.175 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.175 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.433 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.433 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.433 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.433 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.433 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.433 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:52.433 10:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.433 10:40:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.434 10:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.692 [2024-11-15 10:40:23.018303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.692 BaseBdev1 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.692 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.692 [ 00:11:52.692 { 00:11:52.692 "name": "BaseBdev1", 00:11:52.692 "aliases": [ 00:11:52.692 
"3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa" 00:11:52.692 ], 00:11:52.692 "product_name": "Malloc disk", 00:11:52.692 "block_size": 512, 00:11:52.692 "num_blocks": 65536, 00:11:52.692 "uuid": "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa", 00:11:52.692 "assigned_rate_limits": { 00:11:52.692 "rw_ios_per_sec": 0, 00:11:52.692 "rw_mbytes_per_sec": 0, 00:11:52.692 "r_mbytes_per_sec": 0, 00:11:52.693 "w_mbytes_per_sec": 0 00:11:52.693 }, 00:11:52.693 "claimed": true, 00:11:52.693 "claim_type": "exclusive_write", 00:11:52.693 "zoned": false, 00:11:52.693 "supported_io_types": { 00:11:52.693 "read": true, 00:11:52.693 "write": true, 00:11:52.693 "unmap": true, 00:11:52.693 "flush": true, 00:11:52.693 "reset": true, 00:11:52.693 "nvme_admin": false, 00:11:52.693 "nvme_io": false, 00:11:52.693 "nvme_io_md": false, 00:11:52.693 "write_zeroes": true, 00:11:52.693 "zcopy": true, 00:11:52.693 "get_zone_info": false, 00:11:52.693 "zone_management": false, 00:11:52.693 "zone_append": false, 00:11:52.693 "compare": false, 00:11:52.693 "compare_and_write": false, 00:11:52.693 "abort": true, 00:11:52.693 "seek_hole": false, 00:11:52.693 "seek_data": false, 00:11:52.693 "copy": true, 00:11:52.693 "nvme_iov_md": false 00:11:52.693 }, 00:11:52.693 "memory_domains": [ 00:11:52.693 { 00:11:52.693 "dma_device_id": "system", 00:11:52.693 "dma_device_type": 1 00:11:52.693 }, 00:11:52.693 { 00:11:52.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.693 "dma_device_type": 2 00:11:52.693 } 00:11:52.693 ], 00:11:52.693 "driver_specific": {} 00:11:52.693 } 00:11:52.693 ] 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- 
# local raid_bdev_name=Existed_Raid 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.693 "name": "Existed_Raid", 00:11:52.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.693 "strip_size_kb": 0, 00:11:52.693 "state": "configuring", 00:11:52.693 "raid_level": "raid1", 00:11:52.693 "superblock": false, 00:11:52.693 "num_base_bdevs": 3, 00:11:52.693 "num_base_bdevs_discovered": 2, 00:11:52.693 "num_base_bdevs_operational": 3, 00:11:52.693 "base_bdevs_list": [ 
00:11:52.693 { 00:11:52.693 "name": "BaseBdev1", 00:11:52.693 "uuid": "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa", 00:11:52.693 "is_configured": true, 00:11:52.693 "data_offset": 0, 00:11:52.693 "data_size": 65536 00:11:52.693 }, 00:11:52.693 { 00:11:52.693 "name": null, 00:11:52.693 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:52.693 "is_configured": false, 00:11:52.693 "data_offset": 0, 00:11:52.693 "data_size": 65536 00:11:52.693 }, 00:11:52.693 { 00:11:52.693 "name": "BaseBdev3", 00:11:52.693 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:52.693 "is_configured": true, 00:11:52.693 "data_offset": 0, 00:11:52.693 "data_size": 65536 00:11:52.693 } 00:11:52.693 ] 00:11:52.693 }' 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.693 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.331 [2024-11-15 10:40:23.646522] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:53.331 "name": "Existed_Raid", 00:11:53.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.331 "strip_size_kb": 0, 00:11:53.331 "state": "configuring", 00:11:53.331 "raid_level": "raid1", 00:11:53.331 "superblock": false, 00:11:53.331 "num_base_bdevs": 3, 00:11:53.331 "num_base_bdevs_discovered": 1, 00:11:53.331 "num_base_bdevs_operational": 3, 00:11:53.331 "base_bdevs_list": [ 00:11:53.331 { 00:11:53.331 "name": "BaseBdev1", 00:11:53.331 "uuid": "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa", 00:11:53.331 "is_configured": true, 00:11:53.331 "data_offset": 0, 00:11:53.331 "data_size": 65536 00:11:53.331 }, 00:11:53.331 { 00:11:53.331 "name": null, 00:11:53.331 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:53.331 "is_configured": false, 00:11:53.331 "data_offset": 0, 00:11:53.331 "data_size": 65536 00:11:53.331 }, 00:11:53.331 { 00:11:53.331 "name": null, 00:11:53.331 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:53.331 "is_configured": false, 00:11:53.331 "data_offset": 0, 00:11:53.331 "data_size": 65536 00:11:53.331 } 00:11:53.331 ] 00:11:53.331 }' 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.331 10:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.587 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.587 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.587 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:53.587 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 
00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.846 [2024-11-15 10:40:24.190725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.846 "name": "Existed_Raid", 00:11:53.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.846 "strip_size_kb": 0, 00:11:53.846 "state": "configuring", 00:11:53.846 "raid_level": "raid1", 00:11:53.846 "superblock": false, 00:11:53.846 "num_base_bdevs": 3, 00:11:53.846 "num_base_bdevs_discovered": 2, 00:11:53.846 "num_base_bdevs_operational": 3, 00:11:53.846 "base_bdevs_list": [ 00:11:53.846 { 00:11:53.846 "name": "BaseBdev1", 00:11:53.846 "uuid": "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa", 00:11:53.846 "is_configured": true, 00:11:53.846 "data_offset": 0, 00:11:53.846 "data_size": 65536 00:11:53.846 }, 00:11:53.846 { 00:11:53.846 "name": null, 00:11:53.846 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:53.846 "is_configured": false, 00:11:53.846 "data_offset": 0, 00:11:53.846 "data_size": 65536 00:11:53.846 }, 00:11:53.846 { 00:11:53.846 "name": "BaseBdev3", 00:11:53.846 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:53.846 "is_configured": true, 00:11:53.846 "data_offset": 0, 00:11:53.846 "data_size": 65536 00:11:53.846 } 00:11:53.846 ] 00:11:53.846 }' 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.846 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.412 10:40:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.412 [2024-11-15 10:40:24.730862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.412 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.412 "name": "Existed_Raid", 00:11:54.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.412 "strip_size_kb": 0, 00:11:54.412 "state": "configuring", 00:11:54.412 "raid_level": "raid1", 00:11:54.412 "superblock": false, 00:11:54.412 "num_base_bdevs": 3, 00:11:54.412 "num_base_bdevs_discovered": 1, 00:11:54.412 "num_base_bdevs_operational": 3, 00:11:54.412 "base_bdevs_list": [ 00:11:54.412 { 00:11:54.412 "name": null, 00:11:54.412 "uuid": "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa", 00:11:54.412 "is_configured": false, 00:11:54.412 "data_offset": 0, 00:11:54.412 "data_size": 65536 00:11:54.412 }, 00:11:54.412 { 00:11:54.412 "name": null, 00:11:54.412 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:54.412 "is_configured": false, 00:11:54.412 "data_offset": 0, 00:11:54.412 "data_size": 65536 00:11:54.412 }, 00:11:54.412 { 00:11:54.413 "name": "BaseBdev3", 00:11:54.413 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:54.413 "is_configured": true, 00:11:54.413 "data_offset": 0, 00:11:54.413 "data_size": 65536 00:11:54.413 } 00:11:54.413 ] 00:11:54.413 }' 00:11:54.413 10:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:54.413 10:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.978 [2024-11-15 10:40:25.382635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.978 10:40:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.978 "name": "Existed_Raid", 00:11:54.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.978 "strip_size_kb": 0, 00:11:54.978 "state": "configuring", 00:11:54.978 "raid_level": "raid1", 00:11:54.978 "superblock": false, 00:11:54.978 "num_base_bdevs": 3, 00:11:54.978 "num_base_bdevs_discovered": 2, 00:11:54.978 "num_base_bdevs_operational": 3, 00:11:54.978 "base_bdevs_list": [ 00:11:54.978 { 00:11:54.978 "name": null, 00:11:54.978 "uuid": "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa", 00:11:54.978 "is_configured": false, 00:11:54.978 "data_offset": 0, 00:11:54.978 "data_size": 65536 00:11:54.978 }, 00:11:54.978 { 00:11:54.978 "name": "BaseBdev2", 00:11:54.978 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:54.978 "is_configured": true, 00:11:54.978 "data_offset": 
0, 00:11:54.978 "data_size": 65536 00:11:54.978 }, 00:11:54.978 { 00:11:54.978 "name": "BaseBdev3", 00:11:54.978 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:54.978 "is_configured": true, 00:11:54.978 "data_offset": 0, 00:11:54.978 "data_size": 65536 00:11:54.978 } 00:11:54.978 ] 00:11:54.978 }' 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.978 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.547 10:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.547 [2024-11-15 10:40:26.016477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:55.547 [2024-11-15 10:40:26.016543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:55.547 [2024-11-15 10:40:26.016556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:55.547 [2024-11-15 10:40:26.016861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:55.547 [2024-11-15 10:40:26.017044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:55.547 [2024-11-15 10:40:26.017075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:55.547 [2024-11-15 10:40:26.017393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.547 NewBaseBdev 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:55.547 
10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.547 [ 00:11:55.547 { 00:11:55.547 "name": "NewBaseBdev", 00:11:55.547 "aliases": [ 00:11:55.547 "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa" 00:11:55.547 ], 00:11:55.547 "product_name": "Malloc disk", 00:11:55.547 "block_size": 512, 00:11:55.547 "num_blocks": 65536, 00:11:55.547 "uuid": "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa", 00:11:55.547 "assigned_rate_limits": { 00:11:55.547 "rw_ios_per_sec": 0, 00:11:55.547 "rw_mbytes_per_sec": 0, 00:11:55.547 "r_mbytes_per_sec": 0, 00:11:55.547 "w_mbytes_per_sec": 0 00:11:55.547 }, 00:11:55.547 "claimed": true, 00:11:55.547 "claim_type": "exclusive_write", 00:11:55.547 "zoned": false, 00:11:55.547 "supported_io_types": { 00:11:55.547 "read": true, 00:11:55.547 "write": true, 00:11:55.547 "unmap": true, 00:11:55.547 "flush": true, 00:11:55.547 "reset": true, 00:11:55.547 "nvme_admin": false, 00:11:55.547 "nvme_io": false, 00:11:55.547 "nvme_io_md": false, 00:11:55.547 "write_zeroes": true, 00:11:55.547 "zcopy": true, 00:11:55.547 "get_zone_info": false, 00:11:55.547 "zone_management": false, 00:11:55.547 "zone_append": false, 00:11:55.547 "compare": false, 00:11:55.547 "compare_and_write": false, 00:11:55.547 "abort": true, 00:11:55.547 "seek_hole": false, 00:11:55.547 "seek_data": false, 00:11:55.547 "copy": true, 00:11:55.547 "nvme_iov_md": false 00:11:55.547 }, 00:11:55.547 
"memory_domains": [ 00:11:55.547 { 00:11:55.547 "dma_device_id": "system", 00:11:55.547 "dma_device_type": 1 00:11:55.547 }, 00:11:55.547 { 00:11:55.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.547 "dma_device_type": 2 00:11:55.547 } 00:11:55.547 ], 00:11:55.547 "driver_specific": {} 00:11:55.547 } 00:11:55.547 ] 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.547 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.547 "name": "Existed_Raid", 00:11:55.547 "uuid": "e568293d-05a0-438c-9da4-dde7d3a0c59d", 00:11:55.547 "strip_size_kb": 0, 00:11:55.547 "state": "online", 00:11:55.547 "raid_level": "raid1", 00:11:55.547 "superblock": false, 00:11:55.547 "num_base_bdevs": 3, 00:11:55.547 "num_base_bdevs_discovered": 3, 00:11:55.547 "num_base_bdevs_operational": 3, 00:11:55.547 "base_bdevs_list": [ 00:11:55.547 { 00:11:55.547 "name": "NewBaseBdev", 00:11:55.547 "uuid": "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa", 00:11:55.547 "is_configured": true, 00:11:55.547 "data_offset": 0, 00:11:55.547 "data_size": 65536 00:11:55.547 }, 00:11:55.547 { 00:11:55.547 "name": "BaseBdev2", 00:11:55.547 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:55.547 "is_configured": true, 00:11:55.547 "data_offset": 0, 00:11:55.547 "data_size": 65536 00:11:55.547 }, 00:11:55.547 { 00:11:55.548 "name": "BaseBdev3", 00:11:55.548 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:55.548 "is_configured": true, 00:11:55.548 "data_offset": 0, 00:11:55.548 "data_size": 65536 00:11:55.548 } 00:11:55.548 ] 00:11:55.548 }' 00:11:55.548 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.548 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.116 [2024-11-15 10:40:26.537045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.116 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.116 "name": "Existed_Raid", 00:11:56.116 "aliases": [ 00:11:56.116 "e568293d-05a0-438c-9da4-dde7d3a0c59d" 00:11:56.116 ], 00:11:56.116 "product_name": "Raid Volume", 00:11:56.116 "block_size": 512, 00:11:56.116 "num_blocks": 65536, 00:11:56.116 "uuid": "e568293d-05a0-438c-9da4-dde7d3a0c59d", 00:11:56.116 "assigned_rate_limits": { 00:11:56.116 "rw_ios_per_sec": 0, 00:11:56.116 "rw_mbytes_per_sec": 0, 00:11:56.116 "r_mbytes_per_sec": 0, 00:11:56.116 "w_mbytes_per_sec": 0 00:11:56.116 }, 00:11:56.116 "claimed": false, 00:11:56.116 "zoned": false, 00:11:56.116 "supported_io_types": { 00:11:56.116 "read": true, 00:11:56.116 "write": true, 00:11:56.116 "unmap": false, 00:11:56.116 "flush": false, 00:11:56.116 "reset": true, 00:11:56.116 "nvme_admin": false, 00:11:56.116 "nvme_io": false, 00:11:56.116 "nvme_io_md": false, 00:11:56.116 "write_zeroes": true, 
00:11:56.116 "zcopy": false, 00:11:56.116 "get_zone_info": false, 00:11:56.116 "zone_management": false, 00:11:56.116 "zone_append": false, 00:11:56.116 "compare": false, 00:11:56.116 "compare_and_write": false, 00:11:56.116 "abort": false, 00:11:56.116 "seek_hole": false, 00:11:56.116 "seek_data": false, 00:11:56.116 "copy": false, 00:11:56.116 "nvme_iov_md": false 00:11:56.116 }, 00:11:56.116 "memory_domains": [ 00:11:56.116 { 00:11:56.116 "dma_device_id": "system", 00:11:56.116 "dma_device_type": 1 00:11:56.116 }, 00:11:56.116 { 00:11:56.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.116 "dma_device_type": 2 00:11:56.116 }, 00:11:56.116 { 00:11:56.116 "dma_device_id": "system", 00:11:56.116 "dma_device_type": 1 00:11:56.116 }, 00:11:56.116 { 00:11:56.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.116 "dma_device_type": 2 00:11:56.116 }, 00:11:56.116 { 00:11:56.116 "dma_device_id": "system", 00:11:56.116 "dma_device_type": 1 00:11:56.116 }, 00:11:56.116 { 00:11:56.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.116 "dma_device_type": 2 00:11:56.116 } 00:11:56.116 ], 00:11:56.116 "driver_specific": { 00:11:56.116 "raid": { 00:11:56.116 "uuid": "e568293d-05a0-438c-9da4-dde7d3a0c59d", 00:11:56.116 "strip_size_kb": 0, 00:11:56.116 "state": "online", 00:11:56.116 "raid_level": "raid1", 00:11:56.116 "superblock": false, 00:11:56.116 "num_base_bdevs": 3, 00:11:56.116 "num_base_bdevs_discovered": 3, 00:11:56.116 "num_base_bdevs_operational": 3, 00:11:56.116 "base_bdevs_list": [ 00:11:56.116 { 00:11:56.116 "name": "NewBaseBdev", 00:11:56.116 "uuid": "3a3cd0a3-443f-4c17-aa26-4c6bacdcc8aa", 00:11:56.116 "is_configured": true, 00:11:56.116 "data_offset": 0, 00:11:56.116 "data_size": 65536 00:11:56.116 }, 00:11:56.116 { 00:11:56.116 "name": "BaseBdev2", 00:11:56.117 "uuid": "0713e581-cf84-48d0-a61c-e909e1d507d2", 00:11:56.117 "is_configured": true, 00:11:56.117 "data_offset": 0, 00:11:56.117 "data_size": 65536 00:11:56.117 }, 00:11:56.117 { 00:11:56.117 
"name": "BaseBdev3", 00:11:56.117 "uuid": "3bd502bc-fa61-40c7-9455-f8922129a3ba", 00:11:56.117 "is_configured": true, 00:11:56.117 "data_offset": 0, 00:11:56.117 "data_size": 65536 00:11:56.117 } 00:11:56.117 ] 00:11:56.117 } 00:11:56.117 } 00:11:56.117 }' 00:11:56.117 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.117 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:56.117 BaseBdev2 00:11:56.117 BaseBdev3' 00:11:56.117 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.117 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.117 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.117 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:56.117 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.117 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.117 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:56.378 [2024-11-15 10:40:26.804733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.378 [2024-11-15 10:40:26.804775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:56.378 [2024-11-15 10:40:26.804865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.378 [2024-11-15 10:40:26.805225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.378 [2024-11-15 10:40:26.805253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67645 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67645 ']' 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67645 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67645 00:11:56.378 killing process with pid 67645 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67645' 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 67645 00:11:56.378 [2024-11-15 10:40:26.842058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.378 10:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67645 00:11:56.637 [2024-11-15 10:40:27.094282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.572 ************************************ 00:11:57.572 END TEST raid_state_function_test 00:11:57.572 ************************************ 00:11:57.572 10:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:57.572 00:11:57.572 real 0m11.320s 00:11:57.572 user 0m18.957s 00:11:57.572 sys 0m1.378s 00:11:57.572 10:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.572 10:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.572 10:40:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:57.572 10:40:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:57.572 10:40:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.572 10:40:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.830 ************************************ 00:11:57.830 START TEST raid_state_function_test_sb 00:11:57.830 ************************************ 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68277 00:11:57.830 Process raid pid: 68277 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68277' 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68277 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68277 ']' 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:57.830 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.830 [2024-11-15 10:40:28.225993] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:11:57.830 [2024-11-15 10:40:28.226726] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.089 [2024-11-15 10:40:28.405705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.089 [2024-11-15 10:40:28.509146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.346 [2024-11-15 10:40:28.693671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.346 [2024-11-15 10:40:28.693727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.914 [2024-11-15 10:40:29.211941] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.914 [2024-11-15 10:40:29.212009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.914 [2024-11-15 10:40:29.212027] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:58.914 [2024-11-15 10:40:29.212044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:58.914 [2024-11-15 10:40:29.212054] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:58.914 [2024-11-15 10:40:29.212069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.914 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.914 "name": "Existed_Raid", 00:11:58.914 "uuid": "ad0fb4c6-8c1d-4605-b43e-7d01ac28da66", 00:11:58.914 "strip_size_kb": 0, 00:11:58.914 "state": "configuring", 00:11:58.914 "raid_level": "raid1", 00:11:58.914 "superblock": true, 00:11:58.914 "num_base_bdevs": 3, 00:11:58.914 "num_base_bdevs_discovered": 0, 00:11:58.914 "num_base_bdevs_operational": 3, 00:11:58.914 "base_bdevs_list": [ 00:11:58.914 { 00:11:58.914 "name": "BaseBdev1", 00:11:58.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.914 "is_configured": false, 00:11:58.915 "data_offset": 0, 00:11:58.915 "data_size": 0 00:11:58.915 }, 00:11:58.915 { 00:11:58.915 "name": "BaseBdev2", 00:11:58.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.915 "is_configured": false, 00:11:58.915 "data_offset": 0, 00:11:58.915 "data_size": 0 00:11:58.915 }, 00:11:58.915 { 00:11:58.915 "name": "BaseBdev3", 00:11:58.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.915 "is_configured": false, 00:11:58.915 "data_offset": 0, 00:11:58.915 "data_size": 0 00:11:58.915 } 00:11:58.915 ] 00:11:58.915 }' 00:11:58.915 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.915 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.173 [2024-11-15 10:40:29.716126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.173 [2024-11-15 10:40:29.716174] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.173 [2024-11-15 10:40:29.724130] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.173 [2024-11-15 10:40:29.724185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.173 [2024-11-15 10:40:29.724201] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.173 [2024-11-15 10:40:29.724218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.173 [2024-11-15 10:40:29.724228] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:59.173 [2024-11-15 10:40:29.724243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.173 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.431 [2024-11-15 10:40:29.764577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.431 BaseBdev1 
00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.432 [ 00:11:59.432 { 00:11:59.432 "name": "BaseBdev1", 00:11:59.432 "aliases": [ 00:11:59.432 "c9b88d1a-2ab3-476a-8c1f-ad3898aef943" 00:11:59.432 ], 00:11:59.432 "product_name": "Malloc disk", 00:11:59.432 "block_size": 512, 00:11:59.432 "num_blocks": 65536, 00:11:59.432 "uuid": "c9b88d1a-2ab3-476a-8c1f-ad3898aef943", 00:11:59.432 "assigned_rate_limits": { 00:11:59.432 
"rw_ios_per_sec": 0, 00:11:59.432 "rw_mbytes_per_sec": 0, 00:11:59.432 "r_mbytes_per_sec": 0, 00:11:59.432 "w_mbytes_per_sec": 0 00:11:59.432 }, 00:11:59.432 "claimed": true, 00:11:59.432 "claim_type": "exclusive_write", 00:11:59.432 "zoned": false, 00:11:59.432 "supported_io_types": { 00:11:59.432 "read": true, 00:11:59.432 "write": true, 00:11:59.432 "unmap": true, 00:11:59.432 "flush": true, 00:11:59.432 "reset": true, 00:11:59.432 "nvme_admin": false, 00:11:59.432 "nvme_io": false, 00:11:59.432 "nvme_io_md": false, 00:11:59.432 "write_zeroes": true, 00:11:59.432 "zcopy": true, 00:11:59.432 "get_zone_info": false, 00:11:59.432 "zone_management": false, 00:11:59.432 "zone_append": false, 00:11:59.432 "compare": false, 00:11:59.432 "compare_and_write": false, 00:11:59.432 "abort": true, 00:11:59.432 "seek_hole": false, 00:11:59.432 "seek_data": false, 00:11:59.432 "copy": true, 00:11:59.432 "nvme_iov_md": false 00:11:59.432 }, 00:11:59.432 "memory_domains": [ 00:11:59.432 { 00:11:59.432 "dma_device_id": "system", 00:11:59.432 "dma_device_type": 1 00:11:59.432 }, 00:11:59.432 { 00:11:59.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.432 "dma_device_type": 2 00:11:59.432 } 00:11:59.432 ], 00:11:59.432 "driver_specific": {} 00:11:59.432 } 00:11:59.432 ] 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.432 "name": "Existed_Raid", 00:11:59.432 "uuid": "01015c49-a734-44a9-9a07-a1f016a82b92", 00:11:59.432 "strip_size_kb": 0, 00:11:59.432 "state": "configuring", 00:11:59.432 "raid_level": "raid1", 00:11:59.432 "superblock": true, 00:11:59.432 "num_base_bdevs": 3, 00:11:59.432 "num_base_bdevs_discovered": 1, 00:11:59.432 "num_base_bdevs_operational": 3, 00:11:59.432 "base_bdevs_list": [ 00:11:59.432 { 00:11:59.432 "name": "BaseBdev1", 00:11:59.432 "uuid": "c9b88d1a-2ab3-476a-8c1f-ad3898aef943", 00:11:59.432 "is_configured": true, 00:11:59.432 "data_offset": 2048, 00:11:59.432 "data_size": 63488 
00:11:59.432 }, 00:11:59.432 { 00:11:59.432 "name": "BaseBdev2", 00:11:59.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.432 "is_configured": false, 00:11:59.432 "data_offset": 0, 00:11:59.432 "data_size": 0 00:11:59.432 }, 00:11:59.432 { 00:11:59.432 "name": "BaseBdev3", 00:11:59.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.432 "is_configured": false, 00:11:59.432 "data_offset": 0, 00:11:59.432 "data_size": 0 00:11:59.432 } 00:11:59.432 ] 00:11:59.432 }' 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.432 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.997 [2024-11-15 10:40:30.304774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.997 [2024-11-15 10:40:30.304840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.997 [2024-11-15 10:40:30.312820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.997 [2024-11-15 10:40:30.315065] 
bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.997 [2024-11-15 10:40:30.315121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.997 [2024-11-15 10:40:30.315138] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:59.997 [2024-11-15 10:40:30.315154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:59.997 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.998 "name": "Existed_Raid", 00:11:59.998 "uuid": "d9058197-05c9-4563-b710-1d8c80794b58", 00:11:59.998 "strip_size_kb": 0, 00:11:59.998 "state": "configuring", 00:11:59.998 "raid_level": "raid1", 00:11:59.998 "superblock": true, 00:11:59.998 "num_base_bdevs": 3, 00:11:59.998 "num_base_bdevs_discovered": 1, 00:11:59.998 "num_base_bdevs_operational": 3, 00:11:59.998 "base_bdevs_list": [ 00:11:59.998 { 00:11:59.998 "name": "BaseBdev1", 00:11:59.998 "uuid": "c9b88d1a-2ab3-476a-8c1f-ad3898aef943", 00:11:59.998 "is_configured": true, 00:11:59.998 "data_offset": 2048, 00:11:59.998 "data_size": 63488 00:11:59.998 }, 00:11:59.998 { 00:11:59.998 "name": "BaseBdev2", 00:11:59.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.998 "is_configured": false, 00:11:59.998 "data_offset": 0, 00:11:59.998 "data_size": 0 00:11:59.998 }, 00:11:59.998 { 00:11:59.998 "name": "BaseBdev3", 00:11:59.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.998 "is_configured": false, 00:11:59.998 "data_offset": 0, 00:11:59.998 "data_size": 0 00:11:59.998 } 00:11:59.998 ] 00:11:59.998 }' 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.998 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.644 [2024-11-15 10:40:30.870931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.644 BaseBdev2 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.644 [ 00:12:00.644 { 00:12:00.644 "name": "BaseBdev2", 00:12:00.644 "aliases": [ 00:12:00.644 "8adc4d31-5948-49c2-8c68-6f940574f410" 00:12:00.644 ], 00:12:00.644 "product_name": "Malloc disk", 00:12:00.644 "block_size": 512, 00:12:00.644 "num_blocks": 65536, 00:12:00.644 "uuid": "8adc4d31-5948-49c2-8c68-6f940574f410", 00:12:00.644 "assigned_rate_limits": { 00:12:00.644 "rw_ios_per_sec": 0, 00:12:00.644 "rw_mbytes_per_sec": 0, 00:12:00.644 "r_mbytes_per_sec": 0, 00:12:00.644 "w_mbytes_per_sec": 0 00:12:00.644 }, 00:12:00.644 "claimed": true, 00:12:00.644 "claim_type": "exclusive_write", 00:12:00.644 "zoned": false, 00:12:00.644 "supported_io_types": { 00:12:00.644 "read": true, 00:12:00.644 "write": true, 00:12:00.644 "unmap": true, 00:12:00.644 "flush": true, 00:12:00.644 "reset": true, 00:12:00.644 "nvme_admin": false, 00:12:00.644 "nvme_io": false, 00:12:00.644 "nvme_io_md": false, 00:12:00.644 "write_zeroes": true, 00:12:00.644 "zcopy": true, 00:12:00.644 "get_zone_info": false, 00:12:00.644 "zone_management": false, 00:12:00.644 "zone_append": false, 00:12:00.644 "compare": false, 00:12:00.644 "compare_and_write": false, 00:12:00.644 "abort": true, 00:12:00.644 "seek_hole": false, 00:12:00.644 "seek_data": false, 00:12:00.644 "copy": true, 00:12:00.644 "nvme_iov_md": false 00:12:00.644 }, 00:12:00.644 "memory_domains": [ 00:12:00.644 { 00:12:00.644 "dma_device_id": "system", 00:12:00.644 "dma_device_type": 1 00:12:00.644 }, 00:12:00.644 { 00:12:00.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.644 "dma_device_type": 2 00:12:00.644 } 00:12:00.644 ], 00:12:00.644 "driver_specific": {} 00:12:00.644 } 00:12:00.644 ] 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.644 
10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.644 "name": "Existed_Raid", 00:12:00.644 "uuid": "d9058197-05c9-4563-b710-1d8c80794b58", 00:12:00.644 "strip_size_kb": 0, 00:12:00.644 "state": "configuring", 00:12:00.644 "raid_level": "raid1", 00:12:00.644 "superblock": true, 00:12:00.644 "num_base_bdevs": 3, 00:12:00.644 "num_base_bdevs_discovered": 2, 00:12:00.644 "num_base_bdevs_operational": 3, 00:12:00.644 "base_bdevs_list": [ 00:12:00.644 { 00:12:00.644 "name": "BaseBdev1", 00:12:00.644 "uuid": "c9b88d1a-2ab3-476a-8c1f-ad3898aef943", 00:12:00.644 "is_configured": true, 00:12:00.644 "data_offset": 2048, 00:12:00.644 "data_size": 63488 00:12:00.644 }, 00:12:00.644 { 00:12:00.644 "name": "BaseBdev2", 00:12:00.644 "uuid": "8adc4d31-5948-49c2-8c68-6f940574f410", 00:12:00.644 "is_configured": true, 00:12:00.644 "data_offset": 2048, 00:12:00.644 "data_size": 63488 00:12:00.644 }, 00:12:00.644 { 00:12:00.644 "name": "BaseBdev3", 00:12:00.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.644 "is_configured": false, 00:12:00.644 "data_offset": 0, 00:12:00.644 "data_size": 0 00:12:00.644 } 00:12:00.644 ] 00:12:00.644 }' 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.644 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.918 [2024-11-15 10:40:31.470264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.918 [2024-11-15 10:40:31.470606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:12:00.918 [2024-11-15 10:40:31.470646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.918 [2024-11-15 10:40:31.470985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:00.918 BaseBdev3 00:12:00.918 [2024-11-15 10:40:31.471213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:00.918 [2024-11-15 10:40:31.471241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:00.918 [2024-11-15 10:40:31.471451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.918 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.178 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.178 10:40:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:01.178 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.178 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.178 [ 00:12:01.178 { 00:12:01.178 "name": "BaseBdev3", 00:12:01.178 "aliases": [ 00:12:01.178 "19d4b129-9a82-4f78-94cf-86993bcda3ad" 00:12:01.178 ], 00:12:01.178 "product_name": "Malloc disk", 00:12:01.178 "block_size": 512, 00:12:01.178 "num_blocks": 65536, 00:12:01.178 "uuid": "19d4b129-9a82-4f78-94cf-86993bcda3ad", 00:12:01.178 "assigned_rate_limits": { 00:12:01.178 "rw_ios_per_sec": 0, 00:12:01.178 "rw_mbytes_per_sec": 0, 00:12:01.178 "r_mbytes_per_sec": 0, 00:12:01.178 "w_mbytes_per_sec": 0 00:12:01.178 }, 00:12:01.178 "claimed": true, 00:12:01.178 "claim_type": "exclusive_write", 00:12:01.178 "zoned": false, 00:12:01.178 "supported_io_types": { 00:12:01.178 "read": true, 00:12:01.178 "write": true, 00:12:01.178 "unmap": true, 00:12:01.178 "flush": true, 00:12:01.178 "reset": true, 00:12:01.178 "nvme_admin": false, 00:12:01.178 "nvme_io": false, 00:12:01.178 "nvme_io_md": false, 00:12:01.178 "write_zeroes": true, 00:12:01.178 "zcopy": true, 00:12:01.178 "get_zone_info": false, 00:12:01.178 "zone_management": false, 00:12:01.178 "zone_append": false, 00:12:01.178 "compare": false, 00:12:01.178 "compare_and_write": false, 00:12:01.178 "abort": true, 00:12:01.178 "seek_hole": false, 00:12:01.179 "seek_data": false, 00:12:01.179 "copy": true, 00:12:01.179 "nvme_iov_md": false 00:12:01.179 }, 00:12:01.179 "memory_domains": [ 00:12:01.179 { 00:12:01.179 "dma_device_id": "system", 00:12:01.179 "dma_device_type": 1 00:12:01.179 }, 00:12:01.179 { 00:12:01.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.179 "dma_device_type": 2 00:12:01.179 } 00:12:01.179 ], 00:12:01.179 "driver_specific": {} 00:12:01.179 } 00:12:01.179 ] 
00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.179 
10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.179 "name": "Existed_Raid", 00:12:01.179 "uuid": "d9058197-05c9-4563-b710-1d8c80794b58", 00:12:01.179 "strip_size_kb": 0, 00:12:01.179 "state": "online", 00:12:01.179 "raid_level": "raid1", 00:12:01.179 "superblock": true, 00:12:01.179 "num_base_bdevs": 3, 00:12:01.179 "num_base_bdevs_discovered": 3, 00:12:01.179 "num_base_bdevs_operational": 3, 00:12:01.179 "base_bdevs_list": [ 00:12:01.179 { 00:12:01.179 "name": "BaseBdev1", 00:12:01.179 "uuid": "c9b88d1a-2ab3-476a-8c1f-ad3898aef943", 00:12:01.179 "is_configured": true, 00:12:01.179 "data_offset": 2048, 00:12:01.179 "data_size": 63488 00:12:01.179 }, 00:12:01.179 { 00:12:01.179 "name": "BaseBdev2", 00:12:01.179 "uuid": "8adc4d31-5948-49c2-8c68-6f940574f410", 00:12:01.179 "is_configured": true, 00:12:01.179 "data_offset": 2048, 00:12:01.179 "data_size": 63488 00:12:01.179 }, 00:12:01.179 { 00:12:01.179 "name": "BaseBdev3", 00:12:01.179 "uuid": "19d4b129-9a82-4f78-94cf-86993bcda3ad", 00:12:01.179 "is_configured": true, 00:12:01.179 "data_offset": 2048, 00:12:01.179 "data_size": 63488 00:12:01.179 } 00:12:01.179 ] 00:12:01.179 }' 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.179 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.438 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.438 [2024-11-15 10:40:31.986863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:01.697 "name": "Existed_Raid", 00:12:01.697 "aliases": [ 00:12:01.697 "d9058197-05c9-4563-b710-1d8c80794b58" 00:12:01.697 ], 00:12:01.697 "product_name": "Raid Volume", 00:12:01.697 "block_size": 512, 00:12:01.697 "num_blocks": 63488, 00:12:01.697 "uuid": "d9058197-05c9-4563-b710-1d8c80794b58", 00:12:01.697 "assigned_rate_limits": { 00:12:01.697 "rw_ios_per_sec": 0, 00:12:01.697 "rw_mbytes_per_sec": 0, 00:12:01.697 "r_mbytes_per_sec": 0, 00:12:01.697 "w_mbytes_per_sec": 0 00:12:01.697 }, 00:12:01.697 "claimed": false, 00:12:01.697 "zoned": false, 00:12:01.697 "supported_io_types": { 00:12:01.697 "read": true, 00:12:01.697 "write": true, 00:12:01.697 "unmap": false, 00:12:01.697 "flush": false, 00:12:01.697 "reset": true, 00:12:01.697 "nvme_admin": false, 00:12:01.697 "nvme_io": false, 00:12:01.697 "nvme_io_md": false, 00:12:01.697 "write_zeroes": true, 
00:12:01.697 "zcopy": false, 00:12:01.697 "get_zone_info": false, 00:12:01.697 "zone_management": false, 00:12:01.697 "zone_append": false, 00:12:01.697 "compare": false, 00:12:01.697 "compare_and_write": false, 00:12:01.697 "abort": false, 00:12:01.697 "seek_hole": false, 00:12:01.697 "seek_data": false, 00:12:01.697 "copy": false, 00:12:01.697 "nvme_iov_md": false 00:12:01.697 }, 00:12:01.697 "memory_domains": [ 00:12:01.697 { 00:12:01.697 "dma_device_id": "system", 00:12:01.697 "dma_device_type": 1 00:12:01.697 }, 00:12:01.697 { 00:12:01.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.697 "dma_device_type": 2 00:12:01.697 }, 00:12:01.697 { 00:12:01.697 "dma_device_id": "system", 00:12:01.697 "dma_device_type": 1 00:12:01.697 }, 00:12:01.697 { 00:12:01.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.697 "dma_device_type": 2 00:12:01.697 }, 00:12:01.697 { 00:12:01.697 "dma_device_id": "system", 00:12:01.697 "dma_device_type": 1 00:12:01.697 }, 00:12:01.697 { 00:12:01.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.697 "dma_device_type": 2 00:12:01.697 } 00:12:01.697 ], 00:12:01.697 "driver_specific": { 00:12:01.697 "raid": { 00:12:01.697 "uuid": "d9058197-05c9-4563-b710-1d8c80794b58", 00:12:01.697 "strip_size_kb": 0, 00:12:01.697 "state": "online", 00:12:01.697 "raid_level": "raid1", 00:12:01.697 "superblock": true, 00:12:01.697 "num_base_bdevs": 3, 00:12:01.697 "num_base_bdevs_discovered": 3, 00:12:01.697 "num_base_bdevs_operational": 3, 00:12:01.697 "base_bdevs_list": [ 00:12:01.697 { 00:12:01.697 "name": "BaseBdev1", 00:12:01.697 "uuid": "c9b88d1a-2ab3-476a-8c1f-ad3898aef943", 00:12:01.697 "is_configured": true, 00:12:01.697 "data_offset": 2048, 00:12:01.697 "data_size": 63488 00:12:01.697 }, 00:12:01.697 { 00:12:01.697 "name": "BaseBdev2", 00:12:01.697 "uuid": "8adc4d31-5948-49c2-8c68-6f940574f410", 00:12:01.697 "is_configured": true, 00:12:01.697 "data_offset": 2048, 00:12:01.697 "data_size": 63488 00:12:01.697 }, 00:12:01.697 { 
00:12:01.697 "name": "BaseBdev3", 00:12:01.697 "uuid": "19d4b129-9a82-4f78-94cf-86993bcda3ad", 00:12:01.697 "is_configured": true, 00:12:01.697 "data_offset": 2048, 00:12:01.697 "data_size": 63488 00:12:01.697 } 00:12:01.697 ] 00:12:01.697 } 00:12:01.697 } 00:12:01.697 }' 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:01.697 BaseBdev2 00:12:01.697 BaseBdev3' 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.697 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.698 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.698 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.698 10:40:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:01.698 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.698 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.698 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.698 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 [2024-11-15 10:40:32.322635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.958 
10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.958 "name": "Existed_Raid", 00:12:01.958 "uuid": "d9058197-05c9-4563-b710-1d8c80794b58", 00:12:01.958 "strip_size_kb": 0, 00:12:01.958 "state": "online", 00:12:01.958 "raid_level": "raid1", 00:12:01.958 "superblock": true, 00:12:01.958 "num_base_bdevs": 3, 00:12:01.958 "num_base_bdevs_discovered": 2, 00:12:01.958 "num_base_bdevs_operational": 2, 00:12:01.958 "base_bdevs_list": [ 00:12:01.958 { 00:12:01.958 "name": null, 00:12:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.958 "is_configured": false, 00:12:01.958 "data_offset": 0, 00:12:01.958 "data_size": 63488 00:12:01.958 }, 00:12:01.958 { 00:12:01.958 "name": "BaseBdev2", 00:12:01.958 "uuid": "8adc4d31-5948-49c2-8c68-6f940574f410", 00:12:01.958 "is_configured": true, 00:12:01.958 "data_offset": 2048, 00:12:01.958 "data_size": 63488 00:12:01.958 }, 00:12:01.958 { 00:12:01.958 "name": "BaseBdev3", 00:12:01.958 "uuid": "19d4b129-9a82-4f78-94cf-86993bcda3ad", 00:12:01.958 "is_configured": true, 00:12:01.958 "data_offset": 2048, 00:12:01.958 "data_size": 63488 00:12:01.958 } 00:12:01.958 ] 00:12:01.958 }' 00:12:01.958 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.958 
10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.525 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.525 [2024-11-15 10:40:32.950655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:02.525 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.525 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:02.525 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:02.525 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:02.525 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:02.525 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.525 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.525 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.784 [2024-11-15 10:40:33.086327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:02.784 [2024-11-15 10:40:33.086474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.784 [2024-11-15 10:40:33.167980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.784 [2024-11-15 10:40:33.168060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.784 [2024-11-15 10:40:33.168081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.784 BaseBdev2 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.784 [ 00:12:02.784 { 00:12:02.784 "name": "BaseBdev2", 00:12:02.784 "aliases": [ 00:12:02.784 "f2f24c83-b170-4d3d-9f7b-221c0a2ce680" 00:12:02.784 ], 00:12:02.784 "product_name": "Malloc disk", 00:12:02.784 "block_size": 512, 00:12:02.784 "num_blocks": 65536, 00:12:02.784 "uuid": "f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:02.784 "assigned_rate_limits": { 00:12:02.784 "rw_ios_per_sec": 0, 00:12:02.784 "rw_mbytes_per_sec": 0, 00:12:02.784 "r_mbytes_per_sec": 0, 00:12:02.784 "w_mbytes_per_sec": 0 00:12:02.784 }, 00:12:02.784 "claimed": false, 00:12:02.784 "zoned": false, 00:12:02.784 "supported_io_types": { 00:12:02.784 "read": true, 00:12:02.784 "write": true, 00:12:02.784 "unmap": true, 00:12:02.784 "flush": true, 00:12:02.784 "reset": true, 00:12:02.784 "nvme_admin": false, 00:12:02.784 "nvme_io": false, 00:12:02.784 
"nvme_io_md": false, 00:12:02.784 "write_zeroes": true, 00:12:02.784 "zcopy": true, 00:12:02.784 "get_zone_info": false, 00:12:02.784 "zone_management": false, 00:12:02.784 "zone_append": false, 00:12:02.784 "compare": false, 00:12:02.784 "compare_and_write": false, 00:12:02.784 "abort": true, 00:12:02.784 "seek_hole": false, 00:12:02.784 "seek_data": false, 00:12:02.784 "copy": true, 00:12:02.784 "nvme_iov_md": false 00:12:02.784 }, 00:12:02.784 "memory_domains": [ 00:12:02.784 { 00:12:02.784 "dma_device_id": "system", 00:12:02.784 "dma_device_type": 1 00:12:02.784 }, 00:12:02.784 { 00:12:02.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.784 "dma_device_type": 2 00:12:02.784 } 00:12:02.784 ], 00:12:02.784 "driver_specific": {} 00:12:02.784 } 00:12:02.784 ] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.784 BaseBdev3 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.784 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.784 [ 00:12:02.784 { 00:12:02.784 "name": "BaseBdev3", 00:12:02.784 "aliases": [ 00:12:02.784 "e112e8e1-e08c-47cb-b972-0330457b0113" 00:12:02.784 ], 00:12:02.784 "product_name": "Malloc disk", 00:12:02.784 "block_size": 512, 00:12:02.784 "num_blocks": 65536, 00:12:02.784 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:02.784 "assigned_rate_limits": { 00:12:02.784 "rw_ios_per_sec": 0, 00:12:02.784 "rw_mbytes_per_sec": 0, 00:12:02.784 "r_mbytes_per_sec": 0, 00:12:02.784 "w_mbytes_per_sec": 0 00:12:02.784 }, 00:12:02.784 "claimed": false, 00:12:02.784 "zoned": false, 00:12:02.784 "supported_io_types": { 00:12:02.784 "read": true, 00:12:02.784 "write": true, 00:12:02.784 "unmap": true, 00:12:02.784 "flush": true, 00:12:02.784 "reset": true, 00:12:02.784 "nvme_admin": false, 
00:12:02.784 "nvme_io": false, 00:12:02.784 "nvme_io_md": false, 00:12:03.043 "write_zeroes": true, 00:12:03.043 "zcopy": true, 00:12:03.043 "get_zone_info": false, 00:12:03.043 "zone_management": false, 00:12:03.043 "zone_append": false, 00:12:03.043 "compare": false, 00:12:03.043 "compare_and_write": false, 00:12:03.043 "abort": true, 00:12:03.043 "seek_hole": false, 00:12:03.043 "seek_data": false, 00:12:03.043 "copy": true, 00:12:03.043 "nvme_iov_md": false 00:12:03.043 }, 00:12:03.043 "memory_domains": [ 00:12:03.043 { 00:12:03.043 "dma_device_id": "system", 00:12:03.043 "dma_device_type": 1 00:12:03.043 }, 00:12:03.043 { 00:12:03.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.043 "dma_device_type": 2 00:12:03.043 } 00:12:03.043 ], 00:12:03.043 "driver_specific": {} 00:12:03.043 } 00:12:03.043 ] 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.043 [2024-11-15 10:40:33.351111] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.043 [2024-11-15 10:40:33.351172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.043 [2024-11-15 10:40:33.351201] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.043 [2024-11-15 10:40:33.353448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.043 
10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.043 "name": "Existed_Raid", 00:12:03.043 "uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:03.043 "strip_size_kb": 0, 00:12:03.043 "state": "configuring", 00:12:03.043 "raid_level": "raid1", 00:12:03.043 "superblock": true, 00:12:03.043 "num_base_bdevs": 3, 00:12:03.043 "num_base_bdevs_discovered": 2, 00:12:03.043 "num_base_bdevs_operational": 3, 00:12:03.043 "base_bdevs_list": [ 00:12:03.043 { 00:12:03.043 "name": "BaseBdev1", 00:12:03.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.043 "is_configured": false, 00:12:03.043 "data_offset": 0, 00:12:03.043 "data_size": 0 00:12:03.043 }, 00:12:03.043 { 00:12:03.043 "name": "BaseBdev2", 00:12:03.043 "uuid": "f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:03.043 "is_configured": true, 00:12:03.043 "data_offset": 2048, 00:12:03.043 "data_size": 63488 00:12:03.043 }, 00:12:03.043 { 00:12:03.043 "name": "BaseBdev3", 00:12:03.043 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:03.043 "is_configured": true, 00:12:03.043 "data_offset": 2048, 00:12:03.043 "data_size": 63488 00:12:03.043 } 00:12:03.043 ] 00:12:03.043 }' 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.043 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.610 [2024-11-15 10:40:33.875290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:03.610 10:40:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.610 "name": 
"Existed_Raid", 00:12:03.610 "uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:03.610 "strip_size_kb": 0, 00:12:03.610 "state": "configuring", 00:12:03.610 "raid_level": "raid1", 00:12:03.610 "superblock": true, 00:12:03.610 "num_base_bdevs": 3, 00:12:03.610 "num_base_bdevs_discovered": 1, 00:12:03.610 "num_base_bdevs_operational": 3, 00:12:03.610 "base_bdevs_list": [ 00:12:03.610 { 00:12:03.610 "name": "BaseBdev1", 00:12:03.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.610 "is_configured": false, 00:12:03.610 "data_offset": 0, 00:12:03.610 "data_size": 0 00:12:03.610 }, 00:12:03.610 { 00:12:03.610 "name": null, 00:12:03.610 "uuid": "f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:03.610 "is_configured": false, 00:12:03.610 "data_offset": 0, 00:12:03.610 "data_size": 63488 00:12:03.610 }, 00:12:03.610 { 00:12:03.610 "name": "BaseBdev3", 00:12:03.610 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:03.610 "is_configured": true, 00:12:03.610 "data_offset": 2048, 00:12:03.610 "data_size": 63488 00:12:03.610 } 00:12:03.610 ] 00:12:03.610 }' 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.610 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.869 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.869 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.869 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.869 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:04.128 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.128 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:04.128 
10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:04.128 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.128 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.128 [2024-11-15 10:40:34.497041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.128 BaseBdev1 00:12:04.128 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.128 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:04.128 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 [ 00:12:04.129 { 00:12:04.129 "name": "BaseBdev1", 00:12:04.129 "aliases": [ 00:12:04.129 "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6" 00:12:04.129 ], 00:12:04.129 "product_name": "Malloc disk", 00:12:04.129 "block_size": 512, 00:12:04.129 "num_blocks": 65536, 00:12:04.129 "uuid": "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6", 00:12:04.129 "assigned_rate_limits": { 00:12:04.129 "rw_ios_per_sec": 0, 00:12:04.129 "rw_mbytes_per_sec": 0, 00:12:04.129 "r_mbytes_per_sec": 0, 00:12:04.129 "w_mbytes_per_sec": 0 00:12:04.129 }, 00:12:04.129 "claimed": true, 00:12:04.129 "claim_type": "exclusive_write", 00:12:04.129 "zoned": false, 00:12:04.129 "supported_io_types": { 00:12:04.129 "read": true, 00:12:04.129 "write": true, 00:12:04.129 "unmap": true, 00:12:04.129 "flush": true, 00:12:04.129 "reset": true, 00:12:04.129 "nvme_admin": false, 00:12:04.129 "nvme_io": false, 00:12:04.129 "nvme_io_md": false, 00:12:04.129 "write_zeroes": true, 00:12:04.129 "zcopy": true, 00:12:04.129 "get_zone_info": false, 00:12:04.129 "zone_management": false, 00:12:04.129 "zone_append": false, 00:12:04.129 "compare": false, 00:12:04.129 "compare_and_write": false, 00:12:04.129 "abort": true, 00:12:04.129 "seek_hole": false, 00:12:04.129 "seek_data": false, 00:12:04.129 "copy": true, 00:12:04.129 "nvme_iov_md": false 00:12:04.129 }, 00:12:04.129 "memory_domains": [ 00:12:04.129 { 00:12:04.129 "dma_device_id": "system", 00:12:04.129 "dma_device_type": 1 00:12:04.129 }, 00:12:04.129 { 00:12:04.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.129 "dma_device_type": 2 00:12:04.129 } 00:12:04.129 ], 00:12:04.129 "driver_specific": {} 00:12:04.129 } 00:12:04.129 ] 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:04.129 
10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.129 "name": "Existed_Raid", 00:12:04.129 "uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:04.129 "strip_size_kb": 0, 
00:12:04.129 "state": "configuring", 00:12:04.129 "raid_level": "raid1", 00:12:04.129 "superblock": true, 00:12:04.129 "num_base_bdevs": 3, 00:12:04.129 "num_base_bdevs_discovered": 2, 00:12:04.129 "num_base_bdevs_operational": 3, 00:12:04.129 "base_bdevs_list": [ 00:12:04.129 { 00:12:04.129 "name": "BaseBdev1", 00:12:04.129 "uuid": "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6", 00:12:04.129 "is_configured": true, 00:12:04.129 "data_offset": 2048, 00:12:04.129 "data_size": 63488 00:12:04.129 }, 00:12:04.129 { 00:12:04.129 "name": null, 00:12:04.129 "uuid": "f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:04.129 "is_configured": false, 00:12:04.129 "data_offset": 0, 00:12:04.129 "data_size": 63488 00:12:04.129 }, 00:12:04.129 { 00:12:04.129 "name": "BaseBdev3", 00:12:04.129 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:04.129 "is_configured": true, 00:12:04.129 "data_offset": 2048, 00:12:04.129 "data_size": 63488 00:12:04.129 } 00:12:04.129 ] 00:12:04.129 }' 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.129 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.694 [2024-11-15 10:40:35.129259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.694 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.695 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:04.695 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.695 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.695 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.695 "name": "Existed_Raid", 00:12:04.695 "uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:04.695 "strip_size_kb": 0, 00:12:04.695 "state": "configuring", 00:12:04.695 "raid_level": "raid1", 00:12:04.695 "superblock": true, 00:12:04.695 "num_base_bdevs": 3, 00:12:04.695 "num_base_bdevs_discovered": 1, 00:12:04.695 "num_base_bdevs_operational": 3, 00:12:04.695 "base_bdevs_list": [ 00:12:04.695 { 00:12:04.695 "name": "BaseBdev1", 00:12:04.695 "uuid": "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6", 00:12:04.695 "is_configured": true, 00:12:04.695 "data_offset": 2048, 00:12:04.695 "data_size": 63488 00:12:04.695 }, 00:12:04.695 { 00:12:04.695 "name": null, 00:12:04.695 "uuid": "f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:04.695 "is_configured": false, 00:12:04.695 "data_offset": 0, 00:12:04.695 "data_size": 63488 00:12:04.695 }, 00:12:04.695 { 00:12:04.695 "name": null, 00:12:04.695 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:04.695 "is_configured": false, 00:12:04.695 "data_offset": 0, 00:12:04.695 "data_size": 63488 00:12:04.695 } 00:12:04.695 ] 00:12:04.695 }' 00:12:04.695 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.695 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.262 [2024-11-15 10:40:35.701475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.262 "name": "Existed_Raid", 00:12:05.262 "uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:05.262 "strip_size_kb": 0, 00:12:05.262 "state": "configuring", 00:12:05.262 "raid_level": "raid1", 00:12:05.262 "superblock": true, 00:12:05.262 "num_base_bdevs": 3, 00:12:05.262 "num_base_bdevs_discovered": 2, 00:12:05.262 "num_base_bdevs_operational": 3, 00:12:05.262 "base_bdevs_list": [ 00:12:05.262 { 00:12:05.262 "name": "BaseBdev1", 00:12:05.262 "uuid": "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6", 00:12:05.262 "is_configured": true, 00:12:05.262 "data_offset": 2048, 00:12:05.262 "data_size": 63488 00:12:05.262 }, 00:12:05.262 { 00:12:05.262 "name": null, 00:12:05.262 "uuid": "f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:05.262 "is_configured": false, 00:12:05.262 "data_offset": 0, 00:12:05.262 "data_size": 63488 00:12:05.262 }, 00:12:05.262 { 00:12:05.262 "name": "BaseBdev3", 00:12:05.262 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:05.262 "is_configured": true, 00:12:05.262 "data_offset": 2048, 00:12:05.262 "data_size": 63488 00:12:05.262 } 00:12:05.262 ] 00:12:05.262 }' 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.262 10:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.829 [2024-11-15 10:40:36.257613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.829 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.087 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.087 "name": "Existed_Raid", 00:12:06.087 "uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:06.087 "strip_size_kb": 0, 00:12:06.087 "state": "configuring", 00:12:06.087 "raid_level": "raid1", 00:12:06.087 "superblock": true, 00:12:06.087 "num_base_bdevs": 3, 00:12:06.087 "num_base_bdevs_discovered": 1, 00:12:06.087 "num_base_bdevs_operational": 3, 00:12:06.087 "base_bdevs_list": [ 00:12:06.087 { 00:12:06.087 "name": null, 00:12:06.087 "uuid": "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6", 00:12:06.087 "is_configured": false, 00:12:06.087 "data_offset": 0, 00:12:06.087 "data_size": 63488 00:12:06.087 }, 00:12:06.087 { 00:12:06.087 "name": null, 00:12:06.087 "uuid": 
"f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:06.087 "is_configured": false, 00:12:06.087 "data_offset": 0, 00:12:06.087 "data_size": 63488 00:12:06.087 }, 00:12:06.087 { 00:12:06.087 "name": "BaseBdev3", 00:12:06.087 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:06.087 "is_configured": true, 00:12:06.087 "data_offset": 2048, 00:12:06.087 "data_size": 63488 00:12:06.087 } 00:12:06.087 ] 00:12:06.087 }' 00:12:06.087 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.087 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.346 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.346 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.346 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.346 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:06.346 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.604 [2024-11-15 10:40:36.941251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.604 10:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.604 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.604 "name": "Existed_Raid", 00:12:06.604 "uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:06.604 "strip_size_kb": 0, 00:12:06.604 "state": "configuring", 00:12:06.604 
"raid_level": "raid1", 00:12:06.604 "superblock": true, 00:12:06.604 "num_base_bdevs": 3, 00:12:06.604 "num_base_bdevs_discovered": 2, 00:12:06.604 "num_base_bdevs_operational": 3, 00:12:06.604 "base_bdevs_list": [ 00:12:06.604 { 00:12:06.604 "name": null, 00:12:06.604 "uuid": "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6", 00:12:06.604 "is_configured": false, 00:12:06.604 "data_offset": 0, 00:12:06.604 "data_size": 63488 00:12:06.604 }, 00:12:06.604 { 00:12:06.604 "name": "BaseBdev2", 00:12:06.604 "uuid": "f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:06.604 "is_configured": true, 00:12:06.604 "data_offset": 2048, 00:12:06.604 "data_size": 63488 00:12:06.604 }, 00:12:06.604 { 00:12:06.604 "name": "BaseBdev3", 00:12:06.604 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:06.604 "is_configured": true, 00:12:06.604 "data_offset": 2048, 00:12:06.604 "data_size": 63488 00:12:06.604 } 00:12:06.605 ] 00:12:06.605 }' 00:12:06.605 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.605 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.171 10:40:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u afd941ee-8d4f-42a6-aa0d-be6f60cec8b6 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.171 [2024-11-15 10:40:37.578687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:07.171 [2024-11-15 10:40:37.578967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:07.171 [2024-11-15 10:40:37.578986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:07.171 [2024-11-15 10:40:37.579293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:07.171 NewBaseBdev 00:12:07.171 [2024-11-15 10:40:37.579505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:07.171 [2024-11-15 10:40:37.579527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:07.171 [2024-11-15 10:40:37.579688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:07.171 
10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:07.171 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.172 [ 00:12:07.172 { 00:12:07.172 "name": "NewBaseBdev", 00:12:07.172 "aliases": [ 00:12:07.172 "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6" 00:12:07.172 ], 00:12:07.172 "product_name": "Malloc disk", 00:12:07.172 "block_size": 512, 00:12:07.172 "num_blocks": 65536, 00:12:07.172 "uuid": "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6", 00:12:07.172 "assigned_rate_limits": { 00:12:07.172 "rw_ios_per_sec": 0, 00:12:07.172 "rw_mbytes_per_sec": 0, 00:12:07.172 "r_mbytes_per_sec": 0, 00:12:07.172 "w_mbytes_per_sec": 0 00:12:07.172 }, 00:12:07.172 "claimed": true, 00:12:07.172 "claim_type": "exclusive_write", 00:12:07.172 
"zoned": false, 00:12:07.172 "supported_io_types": { 00:12:07.172 "read": true, 00:12:07.172 "write": true, 00:12:07.172 "unmap": true, 00:12:07.172 "flush": true, 00:12:07.172 "reset": true, 00:12:07.172 "nvme_admin": false, 00:12:07.172 "nvme_io": false, 00:12:07.172 "nvme_io_md": false, 00:12:07.172 "write_zeroes": true, 00:12:07.172 "zcopy": true, 00:12:07.172 "get_zone_info": false, 00:12:07.172 "zone_management": false, 00:12:07.172 "zone_append": false, 00:12:07.172 "compare": false, 00:12:07.172 "compare_and_write": false, 00:12:07.172 "abort": true, 00:12:07.172 "seek_hole": false, 00:12:07.172 "seek_data": false, 00:12:07.172 "copy": true, 00:12:07.172 "nvme_iov_md": false 00:12:07.172 }, 00:12:07.172 "memory_domains": [ 00:12:07.172 { 00:12:07.172 "dma_device_id": "system", 00:12:07.172 "dma_device_type": 1 00:12:07.172 }, 00:12:07.172 { 00:12:07.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.172 "dma_device_type": 2 00:12:07.172 } 00:12:07.172 ], 00:12:07.172 "driver_specific": {} 00:12:07.172 } 00:12:07.172 ] 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.172 "name": "Existed_Raid", 00:12:07.172 "uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:07.172 "strip_size_kb": 0, 00:12:07.172 "state": "online", 00:12:07.172 "raid_level": "raid1", 00:12:07.172 "superblock": true, 00:12:07.172 "num_base_bdevs": 3, 00:12:07.172 "num_base_bdevs_discovered": 3, 00:12:07.172 "num_base_bdevs_operational": 3, 00:12:07.172 "base_bdevs_list": [ 00:12:07.172 { 00:12:07.172 "name": "NewBaseBdev", 00:12:07.172 "uuid": "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6", 00:12:07.172 "is_configured": true, 00:12:07.172 "data_offset": 2048, 00:12:07.172 "data_size": 63488 00:12:07.172 }, 00:12:07.172 { 00:12:07.172 "name": "BaseBdev2", 00:12:07.172 "uuid": "f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:07.172 "is_configured": true, 00:12:07.172 "data_offset": 2048, 00:12:07.172 "data_size": 63488 00:12:07.172 }, 00:12:07.172 
{ 00:12:07.172 "name": "BaseBdev3", 00:12:07.172 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:07.172 "is_configured": true, 00:12:07.172 "data_offset": 2048, 00:12:07.172 "data_size": 63488 00:12:07.172 } 00:12:07.172 ] 00:12:07.172 }' 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.172 10:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:07.760 [2024-11-15 10:40:38.127271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.760 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:07.760 "name": "Existed_Raid", 00:12:07.760 
"aliases": [ 00:12:07.760 "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0" 00:12:07.760 ], 00:12:07.760 "product_name": "Raid Volume", 00:12:07.760 "block_size": 512, 00:12:07.760 "num_blocks": 63488, 00:12:07.760 "uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:07.760 "assigned_rate_limits": { 00:12:07.760 "rw_ios_per_sec": 0, 00:12:07.760 "rw_mbytes_per_sec": 0, 00:12:07.760 "r_mbytes_per_sec": 0, 00:12:07.760 "w_mbytes_per_sec": 0 00:12:07.760 }, 00:12:07.760 "claimed": false, 00:12:07.760 "zoned": false, 00:12:07.760 "supported_io_types": { 00:12:07.760 "read": true, 00:12:07.760 "write": true, 00:12:07.760 "unmap": false, 00:12:07.760 "flush": false, 00:12:07.760 "reset": true, 00:12:07.760 "nvme_admin": false, 00:12:07.760 "nvme_io": false, 00:12:07.760 "nvme_io_md": false, 00:12:07.760 "write_zeroes": true, 00:12:07.760 "zcopy": false, 00:12:07.760 "get_zone_info": false, 00:12:07.760 "zone_management": false, 00:12:07.760 "zone_append": false, 00:12:07.760 "compare": false, 00:12:07.760 "compare_and_write": false, 00:12:07.760 "abort": false, 00:12:07.760 "seek_hole": false, 00:12:07.760 "seek_data": false, 00:12:07.760 "copy": false, 00:12:07.760 "nvme_iov_md": false 00:12:07.760 }, 00:12:07.760 "memory_domains": [ 00:12:07.760 { 00:12:07.760 "dma_device_id": "system", 00:12:07.760 "dma_device_type": 1 00:12:07.760 }, 00:12:07.761 { 00:12:07.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.761 "dma_device_type": 2 00:12:07.761 }, 00:12:07.761 { 00:12:07.761 "dma_device_id": "system", 00:12:07.761 "dma_device_type": 1 00:12:07.761 }, 00:12:07.761 { 00:12:07.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.761 "dma_device_type": 2 00:12:07.761 }, 00:12:07.761 { 00:12:07.761 "dma_device_id": "system", 00:12:07.761 "dma_device_type": 1 00:12:07.761 }, 00:12:07.761 { 00:12:07.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.761 "dma_device_type": 2 00:12:07.761 } 00:12:07.761 ], 00:12:07.761 "driver_specific": { 00:12:07.761 "raid": { 00:12:07.761 
"uuid": "9c79fd7e-ba4e-4e31-88f8-54e0c61043d0", 00:12:07.761 "strip_size_kb": 0, 00:12:07.761 "state": "online", 00:12:07.761 "raid_level": "raid1", 00:12:07.761 "superblock": true, 00:12:07.761 "num_base_bdevs": 3, 00:12:07.761 "num_base_bdevs_discovered": 3, 00:12:07.761 "num_base_bdevs_operational": 3, 00:12:07.761 "base_bdevs_list": [ 00:12:07.761 { 00:12:07.761 "name": "NewBaseBdev", 00:12:07.761 "uuid": "afd941ee-8d4f-42a6-aa0d-be6f60cec8b6", 00:12:07.761 "is_configured": true, 00:12:07.761 "data_offset": 2048, 00:12:07.761 "data_size": 63488 00:12:07.761 }, 00:12:07.761 { 00:12:07.761 "name": "BaseBdev2", 00:12:07.761 "uuid": "f2f24c83-b170-4d3d-9f7b-221c0a2ce680", 00:12:07.761 "is_configured": true, 00:12:07.761 "data_offset": 2048, 00:12:07.761 "data_size": 63488 00:12:07.761 }, 00:12:07.761 { 00:12:07.761 "name": "BaseBdev3", 00:12:07.761 "uuid": "e112e8e1-e08c-47cb-b972-0330457b0113", 00:12:07.761 "is_configured": true, 00:12:07.761 "data_offset": 2048, 00:12:07.761 "data_size": 63488 00:12:07.761 } 00:12:07.761 ] 00:12:07.761 } 00:12:07.761 } 00:12:07.761 }' 00:12:07.761 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.761 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:07.761 BaseBdev2 00:12:07.761 BaseBdev3' 00:12:07.761 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.761 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:07.761 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.761 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:07.761 10:40:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.761 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.761 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.761 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.019 10:40:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:08.019 [2024-11-15 10:40:38.442952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:08.019 [2024-11-15 10:40:38.443124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:08.019 [2024-11-15 10:40:38.443222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:08.019 [2024-11-15 10:40:38.443609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:08.019 [2024-11-15 10:40:38.443641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68277
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 68277 ']'
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68277
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68277
00:12:08.019 killing process with pid 68277 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:12:08.019 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:12:08.020 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68277'
00:12:08.020 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68277
00:12:08.020 [2024-11-15 10:40:38.480529] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:08.020 10:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68277
00:12:08.278 [2024-11-15 10:40:38.731760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:09.211 10:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:12:09.211
00:12:09.211 real 0m11.583s
00:12:09.211 user 0m19.479s
00:12:09.211 sys 0m1.443s
00:12:09.211 ************************************
00:12:09.211 END TEST raid_state_function_test_sb
00:12:09.211 ************************************
00:12:09.211 10:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:09.211 10:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:09.211 10:40:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test
raid_superblock_test raid1 3 00:12:09.211 10:40:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:09.211 10:40:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:09.211 10:40:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.211 ************************************ 00:12:09.211 START TEST raid_superblock_test 00:12:09.211 ************************************ 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:09.211 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']'
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:12:09.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68904
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68904
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68904 ']'
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:09.469 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.469 [2024-11-15 10:40:39.872035] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization...
00:12:09.469 [2024-11-15 10:40:39.872214] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68904 ]
00:12:09.726 [2024-11-15 10:40:40.051163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:09.726 [2024-11-15 10:40:40.156180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:09.983 [2024-11-15 10:40:40.354822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:09.983 [2024-11-15 10:40:40.354895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:12:10.551
10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.551 malloc1 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.551 [2024-11-15 10:40:40.977117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:10.551 [2024-11-15 10:40:40.977338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.551 [2024-11-15 10:40:40.977519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:10.551 [2024-11-15 10:40:40.977643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.551 [2024-11-15 10:40:40.980505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.551 [2024-11-15 10:40:40.980676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:10.551 pt1 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.551 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.551 malloc2 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.551 [2024-11-15 10:40:41.026193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:10.551 [2024-11-15 10:40:41.026263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.551 [2024-11-15 10:40:41.026300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:10.551 [2024-11-15 10:40:41.026316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.551 [2024-11-15 10:40:41.028933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.551 [2024-11-15 10:40:41.028981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:10.551 
pt2 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.551 malloc3 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.551 [2024-11-15 10:40:41.091711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:10.551 [2024-11-15 10:40:41.091906] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:10.551 [2024-11-15 10:40:41.091953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:10.551 [2024-11-15 10:40:41.091971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:10.551 [2024-11-15 10:40:41.094525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:10.551 [2024-11-15 10:40:41.094571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:10.551 pt3
00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.551 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.551 [2024-11-15 10:40:41.099763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:10.551 [2024-11-15 10:40:41.102016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:10.551 [2024-11-15 10:40:41.102119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:10.551 [2024-11-15 10:40:41.102336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:12:10.551 [2024-11-15 10:40:41.102395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:10.551 [2024-11-15 10:40:41.102702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:10.552
[2024-11-15 10:40:41.103121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:10.552 [2024-11-15 10:40:41.103151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:10.552 [2024-11-15 10:40:41.103365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.552 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.811 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.811 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.811 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.811 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:10.811 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.811 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.811 "name": "raid_bdev1", 00:12:10.811 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:10.811 "strip_size_kb": 0, 00:12:10.811 "state": "online", 00:12:10.811 "raid_level": "raid1", 00:12:10.811 "superblock": true, 00:12:10.811 "num_base_bdevs": 3, 00:12:10.811 "num_base_bdevs_discovered": 3, 00:12:10.811 "num_base_bdevs_operational": 3, 00:12:10.811 "base_bdevs_list": [ 00:12:10.811 { 00:12:10.811 "name": "pt1", 00:12:10.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:10.811 "is_configured": true, 00:12:10.811 "data_offset": 2048, 00:12:10.811 "data_size": 63488 00:12:10.811 }, 00:12:10.811 { 00:12:10.811 "name": "pt2", 00:12:10.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.811 "is_configured": true, 00:12:10.811 "data_offset": 2048, 00:12:10.811 "data_size": 63488 00:12:10.811 }, 00:12:10.811 { 00:12:10.811 "name": "pt3", 00:12:10.811 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.811 "is_configured": true, 00:12:10.811 "data_offset": 2048, 00:12:10.811 "data_size": 63488 00:12:10.811 } 00:12:10.811 ] 00:12:10.811 }' 00:12:10.811 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.811 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.069 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:11.069 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:11.069 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:11.069 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:11.069 10:40:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:11.069 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:11.069 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:11.069 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.069 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.069 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:11.069 [2024-11-15 10:40:41.624458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.327 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.327 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:11.327 "name": "raid_bdev1", 00:12:11.327 "aliases": [ 00:12:11.327 "1a565ea0-9139-41e0-8488-e45abe8b2dd7" 00:12:11.327 ], 00:12:11.327 "product_name": "Raid Volume", 00:12:11.327 "block_size": 512, 00:12:11.327 "num_blocks": 63488, 00:12:11.327 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:11.327 "assigned_rate_limits": { 00:12:11.327 "rw_ios_per_sec": 0, 00:12:11.327 "rw_mbytes_per_sec": 0, 00:12:11.327 "r_mbytes_per_sec": 0, 00:12:11.327 "w_mbytes_per_sec": 0 00:12:11.327 }, 00:12:11.327 "claimed": false, 00:12:11.327 "zoned": false, 00:12:11.327 "supported_io_types": { 00:12:11.327 "read": true, 00:12:11.327 "write": true, 00:12:11.327 "unmap": false, 00:12:11.328 "flush": false, 00:12:11.328 "reset": true, 00:12:11.328 "nvme_admin": false, 00:12:11.328 "nvme_io": false, 00:12:11.328 "nvme_io_md": false, 00:12:11.328 "write_zeroes": true, 00:12:11.328 "zcopy": false, 00:12:11.328 "get_zone_info": false, 00:12:11.328 "zone_management": false, 00:12:11.328 "zone_append": false, 00:12:11.328 "compare": false, 00:12:11.328 
"compare_and_write": false, 00:12:11.328 "abort": false, 00:12:11.328 "seek_hole": false, 00:12:11.328 "seek_data": false, 00:12:11.328 "copy": false, 00:12:11.328 "nvme_iov_md": false 00:12:11.328 }, 00:12:11.328 "memory_domains": [ 00:12:11.328 { 00:12:11.328 "dma_device_id": "system", 00:12:11.328 "dma_device_type": 1 00:12:11.328 }, 00:12:11.328 { 00:12:11.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.328 "dma_device_type": 2 00:12:11.328 }, 00:12:11.328 { 00:12:11.328 "dma_device_id": "system", 00:12:11.328 "dma_device_type": 1 00:12:11.328 }, 00:12:11.328 { 00:12:11.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.328 "dma_device_type": 2 00:12:11.328 }, 00:12:11.328 { 00:12:11.328 "dma_device_id": "system", 00:12:11.328 "dma_device_type": 1 00:12:11.328 }, 00:12:11.328 { 00:12:11.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.328 "dma_device_type": 2 00:12:11.328 } 00:12:11.328 ], 00:12:11.328 "driver_specific": { 00:12:11.328 "raid": { 00:12:11.328 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:11.328 "strip_size_kb": 0, 00:12:11.328 "state": "online", 00:12:11.328 "raid_level": "raid1", 00:12:11.328 "superblock": true, 00:12:11.328 "num_base_bdevs": 3, 00:12:11.328 "num_base_bdevs_discovered": 3, 00:12:11.328 "num_base_bdevs_operational": 3, 00:12:11.328 "base_bdevs_list": [ 00:12:11.328 { 00:12:11.328 "name": "pt1", 00:12:11.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:11.328 "is_configured": true, 00:12:11.328 "data_offset": 2048, 00:12:11.328 "data_size": 63488 00:12:11.328 }, 00:12:11.328 { 00:12:11.328 "name": "pt2", 00:12:11.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.328 "is_configured": true, 00:12:11.328 "data_offset": 2048, 00:12:11.328 "data_size": 63488 00:12:11.328 }, 00:12:11.328 { 00:12:11.328 "name": "pt3", 00:12:11.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.328 "is_configured": true, 00:12:11.328 "data_offset": 2048, 00:12:11.328 "data_size": 63488 00:12:11.328 } 
00:12:11.328 ] 00:12:11.328 } 00:12:11.328 } 00:12:11.328 }' 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:11.328 pt2 00:12:11.328 pt3' 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.328 10:40:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.328 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:11.586 [2024-11-15 10:40:41.940432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.586 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1a565ea0-9139-41e0-8488-e45abe8b2dd7 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1a565ea0-9139-41e0-8488-e45abe8b2dd7 ']' 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.587 [2024-11-15 10:40:41.992095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.587 [2024-11-15 10:40:41.992129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.587 [2024-11-15 10:40:41.992214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.587 [2024-11-15 10:40:41.992313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.587 [2024-11-15 10:40:41.992330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.587 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:11.587 
10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.587 [2024-11-15 10:40:42.132183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:11.587 [2024-11-15 10:40:42.134587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:11.587 [2024-11-15 10:40:42.134670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:12:11.587 [2024-11-15 10:40:42.134745] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:11.587 [2024-11-15 10:40:42.134818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:11.587 [2024-11-15 10:40:42.134851] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:11.587 [2024-11-15 10:40:42.134878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.587 [2024-11-15 10:40:42.134892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:11.587 request: 00:12:11.587 { 00:12:11.587 "name": "raid_bdev1", 00:12:11.587 "raid_level": "raid1", 00:12:11.587 "base_bdevs": [ 00:12:11.587 "malloc1", 00:12:11.587 "malloc2", 00:12:11.587 "malloc3" 00:12:11.587 ], 00:12:11.587 "superblock": false, 00:12:11.587 "method": "bdev_raid_create", 00:12:11.587 "req_id": 1 00:12:11.587 } 00:12:11.587 Got JSON-RPC error response 00:12:11.587 response: 00:12:11.587 { 00:12:11.587 "code": -17, 00:12:11.587 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:11.587 } 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:11.587 10:40:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.587 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.846 [2024-11-15 10:40:42.204147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:11.846 [2024-11-15 10:40:42.204325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.846 [2024-11-15 10:40:42.204420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:11.846 [2024-11-15 10:40:42.204548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.846 [2024-11-15 10:40:42.207190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.846 [2024-11-15 10:40:42.207232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:11.846 [2024-11-15 10:40:42.207326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:11.846 [2024-11-15 10:40:42.207530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:11.846 pt1 00:12:11.846 10:40:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.846 "name": "raid_bdev1", 00:12:11.846 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:11.846 "strip_size_kb": 0, 00:12:11.846 "state": 
"configuring", 00:12:11.846 "raid_level": "raid1", 00:12:11.846 "superblock": true, 00:12:11.846 "num_base_bdevs": 3, 00:12:11.846 "num_base_bdevs_discovered": 1, 00:12:11.846 "num_base_bdevs_operational": 3, 00:12:11.846 "base_bdevs_list": [ 00:12:11.846 { 00:12:11.846 "name": "pt1", 00:12:11.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:11.846 "is_configured": true, 00:12:11.846 "data_offset": 2048, 00:12:11.846 "data_size": 63488 00:12:11.846 }, 00:12:11.846 { 00:12:11.846 "name": null, 00:12:11.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.846 "is_configured": false, 00:12:11.846 "data_offset": 2048, 00:12:11.846 "data_size": 63488 00:12:11.846 }, 00:12:11.846 { 00:12:11.846 "name": null, 00:12:11.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.846 "is_configured": false, 00:12:11.846 "data_offset": 2048, 00:12:11.846 "data_size": 63488 00:12:11.846 } 00:12:11.846 ] 00:12:11.846 }' 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.846 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.414 [2024-11-15 10:40:42.700318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:12.414 [2024-11-15 10:40:42.700409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.414 [2024-11-15 10:40:42.700444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:12.414 
[2024-11-15 10:40:42.700459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.414 [2024-11-15 10:40:42.700988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.414 [2024-11-15 10:40:42.701021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:12.414 [2024-11-15 10:40:42.701128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:12.414 [2024-11-15 10:40:42.701168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:12.414 pt2 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.414 [2024-11-15 10:40:42.708321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.414 "name": "raid_bdev1", 00:12:12.414 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:12.414 "strip_size_kb": 0, 00:12:12.414 "state": "configuring", 00:12:12.414 "raid_level": "raid1", 00:12:12.414 "superblock": true, 00:12:12.414 "num_base_bdevs": 3, 00:12:12.414 "num_base_bdevs_discovered": 1, 00:12:12.414 "num_base_bdevs_operational": 3, 00:12:12.414 "base_bdevs_list": [ 00:12:12.414 { 00:12:12.414 "name": "pt1", 00:12:12.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.414 "is_configured": true, 00:12:12.414 "data_offset": 2048, 00:12:12.414 "data_size": 63488 00:12:12.414 }, 00:12:12.414 { 00:12:12.414 "name": null, 00:12:12.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.414 "is_configured": false, 00:12:12.414 "data_offset": 0, 00:12:12.414 "data_size": 63488 00:12:12.414 }, 00:12:12.414 { 00:12:12.414 "name": null, 00:12:12.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.414 "is_configured": false, 00:12:12.414 
"data_offset": 2048, 00:12:12.414 "data_size": 63488 00:12:12.414 } 00:12:12.414 ] 00:12:12.414 }' 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.414 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.673 [2024-11-15 10:40:43.156429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:12.673 [2024-11-15 10:40:43.156519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.673 [2024-11-15 10:40:43.156551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:12.673 [2024-11-15 10:40:43.156568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.673 [2024-11-15 10:40:43.157122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.673 [2024-11-15 10:40:43.157160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:12.673 [2024-11-15 10:40:43.157259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:12.673 [2024-11-15 10:40:43.157309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:12.673 pt2 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.673 10:40:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.673 [2024-11-15 10:40:43.164403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:12.673 [2024-11-15 10:40:43.164460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.673 [2024-11-15 10:40:43.164483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:12.673 [2024-11-15 10:40:43.164498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.673 [2024-11-15 10:40:43.164958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.673 [2024-11-15 10:40:43.165010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:12.673 [2024-11-15 10:40:43.165088] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:12.673 [2024-11-15 10:40:43.165122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:12.673 [2024-11-15 10:40:43.165288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:12.673 [2024-11-15 10:40:43.165312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:12.673 [2024-11-15 10:40:43.165625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:12.673 [2024-11-15 10:40:43.165838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:12:12.673 [2024-11-15 10:40:43.165854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:12.673 [2024-11-15 10:40:43.166023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.673 pt3 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.673 10:40:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.673 "name": "raid_bdev1", 00:12:12.673 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:12.673 "strip_size_kb": 0, 00:12:12.673 "state": "online", 00:12:12.673 "raid_level": "raid1", 00:12:12.673 "superblock": true, 00:12:12.673 "num_base_bdevs": 3, 00:12:12.673 "num_base_bdevs_discovered": 3, 00:12:12.673 "num_base_bdevs_operational": 3, 00:12:12.673 "base_bdevs_list": [ 00:12:12.673 { 00:12:12.673 "name": "pt1", 00:12:12.673 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.673 "is_configured": true, 00:12:12.673 "data_offset": 2048, 00:12:12.673 "data_size": 63488 00:12:12.673 }, 00:12:12.673 { 00:12:12.673 "name": "pt2", 00:12:12.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.673 "is_configured": true, 00:12:12.673 "data_offset": 2048, 00:12:12.673 "data_size": 63488 00:12:12.673 }, 00:12:12.673 { 00:12:12.673 "name": "pt3", 00:12:12.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.673 "is_configured": true, 00:12:12.673 "data_offset": 2048, 00:12:12.673 "data_size": 63488 00:12:12.673 } 00:12:12.673 ] 00:12:12.673 }' 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.673 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.240 [2024-11-15 10:40:43.684962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:13.240 "name": "raid_bdev1", 00:12:13.240 "aliases": [ 00:12:13.240 "1a565ea0-9139-41e0-8488-e45abe8b2dd7" 00:12:13.240 ], 00:12:13.240 "product_name": "Raid Volume", 00:12:13.240 "block_size": 512, 00:12:13.240 "num_blocks": 63488, 00:12:13.240 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:13.240 "assigned_rate_limits": { 00:12:13.240 "rw_ios_per_sec": 0, 00:12:13.240 "rw_mbytes_per_sec": 0, 00:12:13.240 "r_mbytes_per_sec": 0, 00:12:13.240 "w_mbytes_per_sec": 0 00:12:13.240 }, 00:12:13.240 "claimed": false, 00:12:13.240 "zoned": false, 00:12:13.240 "supported_io_types": { 00:12:13.240 "read": true, 00:12:13.240 "write": true, 00:12:13.240 "unmap": false, 00:12:13.240 "flush": false, 00:12:13.240 "reset": true, 00:12:13.240 "nvme_admin": false, 00:12:13.240 "nvme_io": false, 00:12:13.240 "nvme_io_md": false, 00:12:13.240 "write_zeroes": true, 00:12:13.240 "zcopy": false, 00:12:13.240 "get_zone_info": 
false, 00:12:13.240 "zone_management": false, 00:12:13.240 "zone_append": false, 00:12:13.240 "compare": false, 00:12:13.240 "compare_and_write": false, 00:12:13.240 "abort": false, 00:12:13.240 "seek_hole": false, 00:12:13.240 "seek_data": false, 00:12:13.240 "copy": false, 00:12:13.240 "nvme_iov_md": false 00:12:13.240 }, 00:12:13.240 "memory_domains": [ 00:12:13.240 { 00:12:13.240 "dma_device_id": "system", 00:12:13.240 "dma_device_type": 1 00:12:13.240 }, 00:12:13.240 { 00:12:13.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.240 "dma_device_type": 2 00:12:13.240 }, 00:12:13.240 { 00:12:13.240 "dma_device_id": "system", 00:12:13.240 "dma_device_type": 1 00:12:13.240 }, 00:12:13.240 { 00:12:13.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.240 "dma_device_type": 2 00:12:13.240 }, 00:12:13.240 { 00:12:13.240 "dma_device_id": "system", 00:12:13.240 "dma_device_type": 1 00:12:13.240 }, 00:12:13.240 { 00:12:13.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.240 "dma_device_type": 2 00:12:13.240 } 00:12:13.240 ], 00:12:13.240 "driver_specific": { 00:12:13.240 "raid": { 00:12:13.240 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:13.240 "strip_size_kb": 0, 00:12:13.240 "state": "online", 00:12:13.240 "raid_level": "raid1", 00:12:13.240 "superblock": true, 00:12:13.240 "num_base_bdevs": 3, 00:12:13.240 "num_base_bdevs_discovered": 3, 00:12:13.240 "num_base_bdevs_operational": 3, 00:12:13.240 "base_bdevs_list": [ 00:12:13.240 { 00:12:13.240 "name": "pt1", 00:12:13.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:13.240 "is_configured": true, 00:12:13.240 "data_offset": 2048, 00:12:13.240 "data_size": 63488 00:12:13.240 }, 00:12:13.240 { 00:12:13.240 "name": "pt2", 00:12:13.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.240 "is_configured": true, 00:12:13.240 "data_offset": 2048, 00:12:13.240 "data_size": 63488 00:12:13.240 }, 00:12:13.240 { 00:12:13.240 "name": "pt3", 00:12:13.240 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:13.240 "is_configured": true, 00:12:13.240 "data_offset": 2048, 00:12:13.240 "data_size": 63488 00:12:13.240 } 00:12:13.240 ] 00:12:13.240 } 00:12:13.240 } 00:12:13.240 }' 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:13.240 pt2 00:12:13.240 pt3' 00:12:13.240 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.498 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.499 10:40:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.499 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.499 [2024-11-15 10:40:44.000983] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1a565ea0-9139-41e0-8488-e45abe8b2dd7 '!=' 1a565ea0-9139-41e0-8488-e45abe8b2dd7 ']' 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.499 [2024-11-15 10:40:44.048789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.499 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.757 10:40:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.757 "name": "raid_bdev1", 00:12:13.757 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:13.757 "strip_size_kb": 0, 00:12:13.757 "state": "online", 00:12:13.757 "raid_level": "raid1", 00:12:13.757 "superblock": true, 00:12:13.757 "num_base_bdevs": 3, 00:12:13.757 "num_base_bdevs_discovered": 2, 00:12:13.757 "num_base_bdevs_operational": 2, 00:12:13.757 "base_bdevs_list": [ 00:12:13.757 { 00:12:13.757 "name": null, 00:12:13.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.757 "is_configured": false, 00:12:13.757 "data_offset": 0, 00:12:13.757 "data_size": 63488 00:12:13.757 }, 00:12:13.757 { 00:12:13.757 "name": "pt2", 00:12:13.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.757 "is_configured": true, 00:12:13.757 "data_offset": 2048, 00:12:13.757 "data_size": 63488 00:12:13.757 }, 00:12:13.757 { 00:12:13.757 "name": "pt3", 00:12:13.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.757 "is_configured": true, 00:12:13.757 "data_offset": 2048, 00:12:13.757 "data_size": 63488 00:12:13.757 } 
00:12:13.757 ] 00:12:13.757 }' 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.757 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.016 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:14.016 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.016 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.016 [2024-11-15 10:40:44.544811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:14.016 [2024-11-15 10:40:44.544846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.016 [2024-11-15 10:40:44.544941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.016 [2024-11-15 10:40:44.545020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.016 [2024-11-15 10:40:44.545042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:14.016 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.016 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.016 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.016 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.016 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:14.016 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.284 10:40:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.284 [2024-11-15 10:40:44.616771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:14.284 [2024-11-15 10:40:44.616838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.284 [2024-11-15 10:40:44.616864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:14.284 [2024-11-15 10:40:44.616880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.284 [2024-11-15 10:40:44.619556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.284 [2024-11-15 10:40:44.619610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:14.284 [2024-11-15 10:40:44.619705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:14.284 [2024-11-15 10:40:44.619771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:14.284 pt2 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.284 10:40:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.284 "name": "raid_bdev1", 00:12:14.284 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:14.284 "strip_size_kb": 0, 00:12:14.284 "state": "configuring", 00:12:14.284 "raid_level": "raid1", 00:12:14.284 "superblock": true, 00:12:14.284 "num_base_bdevs": 3, 00:12:14.284 "num_base_bdevs_discovered": 1, 00:12:14.284 "num_base_bdevs_operational": 2, 00:12:14.284 "base_bdevs_list": [ 00:12:14.284 { 00:12:14.284 "name": null, 00:12:14.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.284 "is_configured": false, 00:12:14.284 "data_offset": 2048, 00:12:14.284 "data_size": 63488 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "name": "pt2", 00:12:14.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.284 "is_configured": true, 00:12:14.284 "data_offset": 2048, 00:12:14.284 "data_size": 63488 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "name": null, 00:12:14.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.284 "is_configured": false, 00:12:14.284 "data_offset": 2048, 00:12:14.284 "data_size": 63488 00:12:14.284 } 
00:12:14.284 ] 00:12:14.284 }' 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.284 10:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.859 [2024-11-15 10:40:45.145003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:14.859 [2024-11-15 10:40:45.145285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.859 [2024-11-15 10:40:45.145367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:14.859 [2024-11-15 10:40:45.145399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.859 [2024-11-15 10:40:45.146106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.859 [2024-11-15 10:40:45.146170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:14.859 [2024-11-15 10:40:45.146325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:14.859 [2024-11-15 10:40:45.146410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:14.859 [2024-11-15 10:40:45.146598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:12:14.859 [2024-11-15 10:40:45.146639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:14.859 [2024-11-15 10:40:45.147033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:14.859 [2024-11-15 10:40:45.147321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:14.859 [2024-11-15 10:40:45.147375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:14.859 [2024-11-15 10:40:45.147646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.859 pt3 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.859 
10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.859 "name": "raid_bdev1", 00:12:14.859 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:14.859 "strip_size_kb": 0, 00:12:14.859 "state": "online", 00:12:14.859 "raid_level": "raid1", 00:12:14.859 "superblock": true, 00:12:14.859 "num_base_bdevs": 3, 00:12:14.859 "num_base_bdevs_discovered": 2, 00:12:14.859 "num_base_bdevs_operational": 2, 00:12:14.859 "base_bdevs_list": [ 00:12:14.859 { 00:12:14.859 "name": null, 00:12:14.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.859 "is_configured": false, 00:12:14.859 "data_offset": 2048, 00:12:14.859 "data_size": 63488 00:12:14.859 }, 00:12:14.859 { 00:12:14.859 "name": "pt2", 00:12:14.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.859 "is_configured": true, 00:12:14.859 "data_offset": 2048, 00:12:14.859 "data_size": 63488 00:12:14.859 }, 00:12:14.859 { 00:12:14.859 "name": "pt3", 00:12:14.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.859 "is_configured": true, 00:12:14.859 "data_offset": 2048, 00:12:14.859 "data_size": 63488 00:12:14.859 } 00:12:14.859 ] 00:12:14.859 }' 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.859 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.128 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.128 10:40:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.128 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.128 [2024-11-15 10:40:45.645074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.128 [2024-11-15 10:40:45.645115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.128 [2024-11-15 10:40:45.645207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.128 [2024-11-15 10:40:45.645298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.128 [2024-11-15 10:40:45.645314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:15.128 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.128 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.128 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:15.128 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.128 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.128 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.387 [2024-11-15 10:40:45.717104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:15.387 [2024-11-15 10:40:45.717177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.387 [2024-11-15 10:40:45.717206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:15.387 [2024-11-15 10:40:45.717221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.387 [2024-11-15 10:40:45.720110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.387 [2024-11-15 10:40:45.720305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:15.387 [2024-11-15 10:40:45.720457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:15.387 [2024-11-15 10:40:45.720531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:15.387 [2024-11-15 10:40:45.720705] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:15.387 [2024-11-15 10:40:45.720724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.387 [2024-11-15 10:40:45.720748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:12:15.387 [2024-11-15 10:40:45.720826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:15.387 pt1 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.387 "name": "raid_bdev1", 00:12:15.387 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:15.387 "strip_size_kb": 0, 00:12:15.387 "state": "configuring", 00:12:15.387 "raid_level": "raid1", 00:12:15.387 "superblock": true, 00:12:15.387 "num_base_bdevs": 3, 00:12:15.387 "num_base_bdevs_discovered": 1, 00:12:15.387 "num_base_bdevs_operational": 2, 00:12:15.387 "base_bdevs_list": [ 00:12:15.387 { 00:12:15.387 "name": null, 00:12:15.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.387 "is_configured": false, 00:12:15.387 "data_offset": 2048, 00:12:15.387 "data_size": 63488 00:12:15.387 }, 00:12:15.387 { 00:12:15.387 "name": "pt2", 00:12:15.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.387 "is_configured": true, 00:12:15.387 "data_offset": 2048, 00:12:15.387 "data_size": 63488 00:12:15.387 }, 00:12:15.387 { 00:12:15.387 "name": null, 00:12:15.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.387 "is_configured": false, 00:12:15.387 "data_offset": 2048, 00:12:15.387 "data_size": 63488 00:12:15.387 } 00:12:15.387 ] 00:12:15.387 }' 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.387 10:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 [2024-11-15 10:40:46.289273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:15.954 [2024-11-15 10:40:46.289370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.954 [2024-11-15 10:40:46.289407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:15.954 [2024-11-15 10:40:46.289422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.954 [2024-11-15 10:40:46.289985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.954 [2024-11-15 10:40:46.290028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:15.954 [2024-11-15 10:40:46.290129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:15.954 [2024-11-15 10:40:46.290164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:15.954 [2024-11-15 10:40:46.290318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:15.954 [2024-11-15 10:40:46.290334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:15.954 [2024-11-15 10:40:46.290657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:15.954 [2024-11-15 10:40:46.290865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:15.954 [2024-11-15 10:40:46.290890] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:15.954 [2024-11-15 10:40:46.291092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.954 pt3 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.954 "name": "raid_bdev1", 00:12:15.954 "uuid": "1a565ea0-9139-41e0-8488-e45abe8b2dd7", 00:12:15.954 "strip_size_kb": 0, 00:12:15.954 "state": "online", 00:12:15.954 "raid_level": "raid1", 00:12:15.954 "superblock": true, 00:12:15.954 "num_base_bdevs": 3, 00:12:15.954 "num_base_bdevs_discovered": 2, 00:12:15.954 "num_base_bdevs_operational": 2, 00:12:15.954 "base_bdevs_list": [ 00:12:15.954 { 00:12:15.954 "name": null, 00:12:15.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.954 "is_configured": false, 00:12:15.954 "data_offset": 2048, 00:12:15.954 "data_size": 63488 00:12:15.954 }, 00:12:15.954 { 00:12:15.954 "name": "pt2", 00:12:15.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.954 "is_configured": true, 00:12:15.954 "data_offset": 2048, 00:12:15.954 "data_size": 63488 00:12:15.954 }, 00:12:15.954 { 00:12:15.954 "name": "pt3", 00:12:15.954 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.954 "is_configured": true, 00:12:15.954 "data_offset": 2048, 00:12:15.954 "data_size": 63488 00:12:15.954 } 00:12:15.954 ] 00:12:15.954 }' 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.954 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:16.521 [2024-11-15 10:40:46.865730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1a565ea0-9139-41e0-8488-e45abe8b2dd7 '!=' 1a565ea0-9139-41e0-8488-e45abe8b2dd7 ']' 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68904 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68904 ']' 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68904 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68904 00:12:16.521 killing process with pid 68904 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68904' 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@971 -- # kill 68904 00:12:16.521 [2024-11-15 10:40:46.939955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:16.521 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68904 00:12:16.521 [2024-11-15 10:40:46.940063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.521 [2024-11-15 10:40:46.940143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.521 [2024-11-15 10:40:46.940161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:16.779 [2024-11-15 10:40:47.193250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.716 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:17.716 00:12:17.716 real 0m8.427s 00:12:17.716 user 0m13.950s 00:12:17.716 sys 0m1.038s 00:12:17.716 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:17.716 ************************************ 00:12:17.716 END TEST raid_superblock_test 00:12:17.716 ************************************ 00:12:17.716 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.716 10:40:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:17.716 10:40:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:17.716 10:40:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:17.716 10:40:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.716 ************************************ 00:12:17.716 START TEST raid_read_error_test 00:12:17.716 ************************************ 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:12:17.716 10:40:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:17.716 10:40:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Fn8ReVZy76 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69361 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69361 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69361 ']' 00:12:17.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:17.716 10:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.974 [2024-11-15 10:40:48.371124] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:12:17.974 [2024-11-15 10:40:48.371313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69361 ] 00:12:18.233 [2024-11-15 10:40:48.558930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.233 [2024-11-15 10:40:48.681542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.492 [2024-11-15 10:40:48.862257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.492 [2024-11-15 10:40:48.862338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 BaseBdev1_malloc 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 true 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 [2024-11-15 10:40:49.404895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:19.060 [2024-11-15 10:40:49.405120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.060 [2024-11-15 10:40:49.405162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:19.060 [2024-11-15 10:40:49.405183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.060 [2024-11-15 10:40:49.407832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.060 [2024-11-15 10:40:49.407885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:19.060 BaseBdev1 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 BaseBdev2_malloc 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 true 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 [2024-11-15 10:40:49.456408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:19.060 [2024-11-15 10:40:49.456478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.060 [2024-11-15 10:40:49.456506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:19.060 [2024-11-15 10:40:49.456523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.060 [2024-11-15 10:40:49.459097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.060 [2024-11-15 10:40:49.459151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:19.060 BaseBdev2 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 BaseBdev3_malloc 00:12:19.060 10:40:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 true 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 [2024-11-15 10:40:49.523200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:19.060 [2024-11-15 10:40:49.523438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.060 [2024-11-15 10:40:49.523479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:19.060 [2024-11-15 10:40:49.523499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.060 [2024-11-15 10:40:49.526095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.060 [2024-11-15 10:40:49.526141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:19.060 BaseBdev3 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 [2024-11-15 10:40:49.531291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.060 [2024-11-15 10:40:49.533675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.060 [2024-11-15 10:40:49.533781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.060 [2024-11-15 10:40:49.534067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:19.060 [2024-11-15 10:40:49.534086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.060 [2024-11-15 10:40:49.534424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:19.060 [2024-11-15 10:40:49.534652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:19.060 [2024-11-15 10:40:49.534672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:19.060 [2024-11-15 10:40:49.534862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.060 10:40:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.060 "name": "raid_bdev1", 00:12:19.060 "uuid": "e577e1f9-c2eb-4328-bff8-65d16d05b755", 00:12:19.060 "strip_size_kb": 0, 00:12:19.060 "state": "online", 00:12:19.060 "raid_level": "raid1", 00:12:19.060 "superblock": true, 00:12:19.060 "num_base_bdevs": 3, 00:12:19.060 "num_base_bdevs_discovered": 3, 00:12:19.060 "num_base_bdevs_operational": 3, 00:12:19.060 "base_bdevs_list": [ 00:12:19.060 { 00:12:19.060 "name": "BaseBdev1", 00:12:19.060 "uuid": "3d455dd9-2e47-5298-b7b9-219b4275ff09", 00:12:19.061 "is_configured": true, 00:12:19.061 "data_offset": 2048, 00:12:19.061 "data_size": 63488 00:12:19.061 }, 00:12:19.061 { 00:12:19.061 "name": "BaseBdev2", 00:12:19.061 "uuid": "3bbd9256-10d1-5c20-b601-6bc560f47399", 00:12:19.061 "is_configured": true, 00:12:19.061 "data_offset": 2048, 00:12:19.061 "data_size": 63488 
00:12:19.061 }, 00:12:19.061 { 00:12:19.061 "name": "BaseBdev3", 00:12:19.061 "uuid": "e97878dc-c2c6-5d84-95e8-8b895d119936", 00:12:19.061 "is_configured": true, 00:12:19.061 "data_offset": 2048, 00:12:19.061 "data_size": 63488 00:12:19.061 } 00:12:19.061 ] 00:12:19.061 }' 00:12:19.061 10:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.061 10:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.628 10:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:19.628 10:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:19.886 [2024-11-15 10:40:50.188773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.847 
10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.847 "name": "raid_bdev1", 00:12:20.847 "uuid": "e577e1f9-c2eb-4328-bff8-65d16d05b755", 00:12:20.847 "strip_size_kb": 0, 00:12:20.847 "state": "online", 00:12:20.847 "raid_level": "raid1", 00:12:20.847 "superblock": true, 00:12:20.847 "num_base_bdevs": 3, 00:12:20.847 "num_base_bdevs_discovered": 3, 00:12:20.847 "num_base_bdevs_operational": 3, 00:12:20.847 "base_bdevs_list": [ 00:12:20.847 { 00:12:20.847 "name": "BaseBdev1", 00:12:20.847 "uuid": "3d455dd9-2e47-5298-b7b9-219b4275ff09", 
00:12:20.847 "is_configured": true, 00:12:20.847 "data_offset": 2048, 00:12:20.847 "data_size": 63488 00:12:20.847 }, 00:12:20.847 { 00:12:20.847 "name": "BaseBdev2", 00:12:20.847 "uuid": "3bbd9256-10d1-5c20-b601-6bc560f47399", 00:12:20.847 "is_configured": true, 00:12:20.847 "data_offset": 2048, 00:12:20.847 "data_size": 63488 00:12:20.847 }, 00:12:20.847 { 00:12:20.847 "name": "BaseBdev3", 00:12:20.847 "uuid": "e97878dc-c2c6-5d84-95e8-8b895d119936", 00:12:20.847 "is_configured": true, 00:12:20.847 "data_offset": 2048, 00:12:20.847 "data_size": 63488 00:12:20.847 } 00:12:20.847 ] 00:12:20.847 }' 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.847 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.105 [2024-11-15 10:40:51.616381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.105 [2024-11-15 10:40:51.616592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.105 [2024-11-15 10:40:51.620331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.105 [2024-11-15 10:40:51.620472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.105 [2024-11-15 10:40:51.620615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.105 [2024-11-15 10:40:51.620632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:21.105 { 00:12:21.105 "results": [ 00:12:21.105 { 00:12:21.105 "job": "raid_bdev1", 
00:12:21.105 "core_mask": "0x1", 00:12:21.105 "workload": "randrw", 00:12:21.105 "percentage": 50, 00:12:21.105 "status": "finished", 00:12:21.105 "queue_depth": 1, 00:12:21.105 "io_size": 131072, 00:12:21.105 "runtime": 1.425663, 00:12:21.105 "iops": 9978.515259216238, 00:12:21.105 "mibps": 1247.3144074020297, 00:12:21.105 "io_failed": 0, 00:12:21.105 "io_timeout": 0, 00:12:21.105 "avg_latency_us": 95.44801298518718, 00:12:21.105 "min_latency_us": 43.75272727272727, 00:12:21.105 "max_latency_us": 1995.8690909090908 00:12:21.105 } 00:12:21.105 ], 00:12:21.105 "core_count": 1 00:12:21.105 } 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69361 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69361 ']' 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69361 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69361 00:12:21.105 killing process with pid 69361 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69361' 00:12:21.105 10:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69361 00:12:21.105 [2024-11-15 10:40:51.660348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:21.105 10:40:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69361 00:12:21.364 [2024-11-15 10:40:51.855907] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Fn8ReVZy76 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:22.740 00:12:22.740 real 0m4.690s 00:12:22.740 user 0m5.906s 00:12:22.740 sys 0m0.494s 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:22.740 10:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 ************************************ 00:12:22.740 END TEST raid_read_error_test 00:12:22.740 ************************************ 00:12:22.740 10:40:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:22.740 10:40:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:22.740 10:40:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:22.740 10:40:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 ************************************ 00:12:22.740 START TEST raid_write_error_test 00:12:22.740 ************************************ 00:12:22.740 10:40:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YjHiS0G6zW 00:12:22.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69501 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69501 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69501 ']' 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:22.740 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.740 [2024-11-15 10:40:53.087598] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:12:22.740 [2024-11-15 10:40:53.087751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69501 ] 00:12:22.740 [2024-11-15 10:40:53.261752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.998 [2024-11-15 10:40:53.388072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.256 [2024-11-15 10:40:53.601431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.256 [2024-11-15 10:40:53.601648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 BaseBdev1_malloc 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 true 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 [2024-11-15 10:40:54.190401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:23.823 [2024-11-15 10:40:54.190472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.823 [2024-11-15 10:40:54.190502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:23.823 [2024-11-15 10:40:54.190520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.823 [2024-11-15 10:40:54.193136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.823 [2024-11-15 10:40:54.193316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.823 BaseBdev1 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.823 BaseBdev2_malloc 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 true 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 [2024-11-15 10:40:54.245779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:23.823 [2024-11-15 10:40:54.245848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.823 [2024-11-15 10:40:54.245875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:23.823 [2024-11-15 10:40:54.245892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.823 [2024-11-15 10:40:54.248533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.823 [2024-11-15 10:40:54.248585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.823 BaseBdev2 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.823 10:40:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 BaseBdev3_malloc 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 true 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 [2024-11-15 10:40:54.323863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:23.823 [2024-11-15 10:40:54.323932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.823 [2024-11-15 10:40:54.323959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:23.823 [2024-11-15 10:40:54.323977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.823 [2024-11-15 10:40:54.326574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.823 [2024-11-15 10:40:54.326625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:23.823 BaseBdev3 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 [2024-11-15 10:40:54.331948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.823 [2024-11-15 10:40:54.334335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.823 [2024-11-15 10:40:54.334583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.823 [2024-11-15 10:40:54.334918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:23.823 [2024-11-15 10:40:54.335058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:23.823 [2024-11-15 10:40:54.335444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:23.823 [2024-11-15 10:40:54.335799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:23.823 [2024-11-15 10:40:54.335936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:23.823 [2024-11-15 10:40:54.336387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.823 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.082 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.082 "name": "raid_bdev1", 00:12:24.082 "uuid": "2547e0da-ddbe-4fb7-b048-5c6a726b2855", 00:12:24.082 "strip_size_kb": 0, 00:12:24.082 "state": "online", 00:12:24.082 "raid_level": "raid1", 00:12:24.082 "superblock": true, 00:12:24.082 "num_base_bdevs": 3, 00:12:24.082 "num_base_bdevs_discovered": 3, 00:12:24.082 "num_base_bdevs_operational": 3, 00:12:24.082 "base_bdevs_list": [ 00:12:24.082 { 00:12:24.082 "name": "BaseBdev1", 00:12:24.082 
"uuid": "0b733817-23af-5e5e-bd5f-740b0e2f2fc0", 00:12:24.082 "is_configured": true, 00:12:24.082 "data_offset": 2048, 00:12:24.082 "data_size": 63488 00:12:24.082 }, 00:12:24.082 { 00:12:24.082 "name": "BaseBdev2", 00:12:24.082 "uuid": "b1a77579-41b1-51bb-9c54-0e50ba3ef5ac", 00:12:24.082 "is_configured": true, 00:12:24.082 "data_offset": 2048, 00:12:24.082 "data_size": 63488 00:12:24.082 }, 00:12:24.082 { 00:12:24.082 "name": "BaseBdev3", 00:12:24.082 "uuid": "1d38f9e0-5ee4-51d3-87cf-1746f7c518e4", 00:12:24.082 "is_configured": true, 00:12:24.082 "data_offset": 2048, 00:12:24.082 "data_size": 63488 00:12:24.082 } 00:12:24.082 ] 00:12:24.082 }' 00:12:24.082 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.082 10:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.348 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:24.348 10:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:24.609 [2024-11-15 10:40:55.001792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.546 [2024-11-15 10:40:55.856591] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:25.546 [2024-11-15 10:40:55.856653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.546 [2024-11-15 10:40:55.856907] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.546 
10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.546 "name": "raid_bdev1", 00:12:25.546 "uuid": "2547e0da-ddbe-4fb7-b048-5c6a726b2855", 00:12:25.546 "strip_size_kb": 0, 00:12:25.546 "state": "online", 00:12:25.546 "raid_level": "raid1", 00:12:25.546 "superblock": true, 00:12:25.546 "num_base_bdevs": 3, 00:12:25.546 "num_base_bdevs_discovered": 2, 00:12:25.546 "num_base_bdevs_operational": 2, 00:12:25.546 "base_bdevs_list": [ 00:12:25.546 { 00:12:25.546 "name": null, 00:12:25.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.546 "is_configured": false, 00:12:25.546 "data_offset": 0, 00:12:25.546 "data_size": 63488 00:12:25.546 }, 00:12:25.546 { 00:12:25.546 "name": "BaseBdev2", 00:12:25.546 "uuid": "b1a77579-41b1-51bb-9c54-0e50ba3ef5ac", 00:12:25.546 "is_configured": true, 00:12:25.546 "data_offset": 2048, 00:12:25.546 "data_size": 63488 00:12:25.546 }, 00:12:25.546 { 00:12:25.546 "name": "BaseBdev3", 00:12:25.546 "uuid": "1d38f9e0-5ee4-51d3-87cf-1746f7c518e4", 00:12:25.546 "is_configured": true, 00:12:25.546 "data_offset": 2048, 00:12:25.546 "data_size": 63488 00:12:25.546 } 00:12:25.546 ] 00:12:25.546 }' 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.546 10:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.114 [2024-11-15 10:40:56.398135] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.114 [2024-11-15 10:40:56.398177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.114 [2024-11-15 10:40:56.401682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.114 [2024-11-15 10:40:56.401754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.114 [2024-11-15 10:40:56.401854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.114 [2024-11-15 10:40:56.401879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:26.114 { 00:12:26.114 "results": [ 00:12:26.114 { 00:12:26.114 "job": "raid_bdev1", 00:12:26.114 "core_mask": "0x1", 00:12:26.114 "workload": "randrw", 00:12:26.114 "percentage": 50, 00:12:26.114 "status": "finished", 00:12:26.114 "queue_depth": 1, 00:12:26.114 "io_size": 131072, 00:12:26.114 "runtime": 1.394238, 00:12:26.114 "iops": 11536.767754142405, 00:12:26.114 "mibps": 1442.0959692678007, 00:12:26.114 "io_failed": 0, 00:12:26.114 "io_timeout": 0, 00:12:26.114 "avg_latency_us": 82.02089671348234, 00:12:26.114 "min_latency_us": 42.589090909090906, 00:12:26.114 "max_latency_us": 1936.290909090909 00:12:26.114 } 00:12:26.114 ], 00:12:26.114 "core_count": 1 00:12:26.114 } 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69501 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69501 ']' 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69501 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:26.114 10:40:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69501 00:12:26.114 killing process with pid 69501 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69501' 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69501 00:12:26.114 [2024-11-15 10:40:56.434768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.114 10:40:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69501 00:12:26.114 [2024-11-15 10:40:56.627263] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YjHiS0G6zW 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:27.491 ************************************ 00:12:27.491 END TEST raid_write_error_test 00:12:27.491 ************************************ 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:12:27.491 00:12:27.491 real 0m4.679s 00:12:27.491 user 0m5.998s 00:12:27.491 sys 0m0.480s 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:27.491 10:40:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.491 10:40:57 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:27.491 10:40:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:27.491 10:40:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:27.491 10:40:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:27.491 10:40:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:27.491 10:40:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.491 ************************************ 00:12:27.491 START TEST raid_state_function_test 00:12:27.491 ************************************ 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.491 
10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:27.491 10:40:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:27.491 Process raid pid: 69645 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69645 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69645' 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69645 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69645 ']' 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:27.491 10:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.492 [2024-11-15 10:40:57.839945] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:12:27.492 [2024-11-15 10:40:57.840654] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.492 [2024-11-15 10:40:58.031229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.749 [2024-11-15 10:40:58.164197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.008 [2024-11-15 10:40:58.397058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.008 [2024-11-15 10:40:58.397119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.575 [2024-11-15 10:40:58.833759] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.575 [2024-11-15 10:40:58.833835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.575 [2024-11-15 10:40:58.833854] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.575 [2024-11-15 10:40:58.833871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.575 [2024-11-15 10:40:58.833882] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:28.575 [2024-11-15 10:40:58.833897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.575 [2024-11-15 10:40:58.833908] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:28.575 [2024-11-15 10:40:58.833922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.575 "name": "Existed_Raid", 00:12:28.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.575 "strip_size_kb": 64, 00:12:28.575 "state": "configuring", 00:12:28.575 "raid_level": "raid0", 00:12:28.575 "superblock": false, 00:12:28.575 "num_base_bdevs": 4, 00:12:28.575 "num_base_bdevs_discovered": 0, 00:12:28.575 "num_base_bdevs_operational": 4, 00:12:28.575 "base_bdevs_list": [ 00:12:28.575 { 00:12:28.575 "name": "BaseBdev1", 00:12:28.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.575 "is_configured": false, 00:12:28.575 "data_offset": 0, 00:12:28.575 "data_size": 0 00:12:28.575 }, 00:12:28.575 { 00:12:28.575 "name": "BaseBdev2", 00:12:28.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.575 "is_configured": false, 00:12:28.575 "data_offset": 0, 00:12:28.575 "data_size": 0 00:12:28.575 }, 00:12:28.575 { 00:12:28.575 "name": "BaseBdev3", 00:12:28.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.575 "is_configured": false, 00:12:28.575 "data_offset": 0, 00:12:28.575 "data_size": 0 00:12:28.575 }, 00:12:28.575 { 00:12:28.575 "name": "BaseBdev4", 00:12:28.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.575 "is_configured": false, 00:12:28.575 "data_offset": 0, 00:12:28.575 "data_size": 0 00:12:28.575 } 00:12:28.575 ] 00:12:28.575 }' 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.575 10:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.833 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:28.833 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.833 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.833 [2024-11-15 10:40:59.365840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.833 [2024-11-15 10:40:59.365890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:28.833 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.834 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:28.834 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.834 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.834 [2024-11-15 10:40:59.373836] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.834 [2024-11-15 10:40:59.373894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.834 [2024-11-15 10:40:59.373911] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.834 [2024-11-15 10:40:59.373927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.834 [2024-11-15 10:40:59.373937] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:28.834 [2024-11-15 10:40:59.373952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.834 [2024-11-15 10:40:59.373962] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:28.834 [2024-11-15 10:40:59.373977] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:28.834 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.834 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:28.834 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.834 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.093 [2024-11-15 10:40:59.414534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.093 BaseBdev1 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.093 [ 00:12:29.093 { 00:12:29.093 "name": "BaseBdev1", 00:12:29.093 "aliases": [ 00:12:29.093 "f2e57934-54b9-48b0-bd8c-4eeedfa5c11b" 00:12:29.093 ], 00:12:29.093 "product_name": "Malloc disk", 00:12:29.093 "block_size": 512, 00:12:29.093 "num_blocks": 65536, 00:12:29.093 "uuid": "f2e57934-54b9-48b0-bd8c-4eeedfa5c11b", 00:12:29.093 "assigned_rate_limits": { 00:12:29.093 "rw_ios_per_sec": 0, 00:12:29.093 "rw_mbytes_per_sec": 0, 00:12:29.093 "r_mbytes_per_sec": 0, 00:12:29.093 "w_mbytes_per_sec": 0 00:12:29.093 }, 00:12:29.093 "claimed": true, 00:12:29.093 "claim_type": "exclusive_write", 00:12:29.093 "zoned": false, 00:12:29.093 "supported_io_types": { 00:12:29.093 "read": true, 00:12:29.093 "write": true, 00:12:29.093 "unmap": true, 00:12:29.093 "flush": true, 00:12:29.093 "reset": true, 00:12:29.093 "nvme_admin": false, 00:12:29.093 "nvme_io": false, 00:12:29.093 "nvme_io_md": false, 00:12:29.093 "write_zeroes": true, 00:12:29.093 "zcopy": true, 00:12:29.093 "get_zone_info": false, 00:12:29.093 "zone_management": false, 00:12:29.093 "zone_append": false, 00:12:29.093 "compare": false, 00:12:29.093 "compare_and_write": false, 00:12:29.093 "abort": true, 00:12:29.093 "seek_hole": false, 00:12:29.093 "seek_data": false, 00:12:29.093 "copy": true, 00:12:29.093 "nvme_iov_md": false 00:12:29.093 }, 00:12:29.093 "memory_domains": [ 00:12:29.093 { 00:12:29.093 "dma_device_id": "system", 00:12:29.093 "dma_device_type": 1 00:12:29.093 }, 00:12:29.093 { 00:12:29.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.093 "dma_device_type": 2 00:12:29.093 } 00:12:29.093 ], 00:12:29.093 "driver_specific": {} 00:12:29.093 } 00:12:29.093 ] 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.093 "name": "Existed_Raid", 
00:12:29.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.093 "strip_size_kb": 64, 00:12:29.093 "state": "configuring", 00:12:29.093 "raid_level": "raid0", 00:12:29.093 "superblock": false, 00:12:29.093 "num_base_bdevs": 4, 00:12:29.093 "num_base_bdevs_discovered": 1, 00:12:29.093 "num_base_bdevs_operational": 4, 00:12:29.093 "base_bdevs_list": [ 00:12:29.093 { 00:12:29.093 "name": "BaseBdev1", 00:12:29.093 "uuid": "f2e57934-54b9-48b0-bd8c-4eeedfa5c11b", 00:12:29.093 "is_configured": true, 00:12:29.093 "data_offset": 0, 00:12:29.093 "data_size": 65536 00:12:29.093 }, 00:12:29.093 { 00:12:29.093 "name": "BaseBdev2", 00:12:29.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.093 "is_configured": false, 00:12:29.093 "data_offset": 0, 00:12:29.093 "data_size": 0 00:12:29.093 }, 00:12:29.093 { 00:12:29.093 "name": "BaseBdev3", 00:12:29.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.093 "is_configured": false, 00:12:29.093 "data_offset": 0, 00:12:29.093 "data_size": 0 00:12:29.093 }, 00:12:29.093 { 00:12:29.093 "name": "BaseBdev4", 00:12:29.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.093 "is_configured": false, 00:12:29.093 "data_offset": 0, 00:12:29.093 "data_size": 0 00:12:29.093 } 00:12:29.093 ] 00:12:29.093 }' 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.093 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.659 [2024-11-15 10:40:59.966726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.659 [2024-11-15 10:40:59.966794] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.659 [2024-11-15 10:40:59.974771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.659 [2024-11-15 10:40:59.977163] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:29.659 [2024-11-15 10:40:59.977368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:29.659 [2024-11-15 10:40:59.977519] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:29.659 [2024-11-15 10:40:59.977695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:29.659 [2024-11-15 10:40:59.977853] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:29.659 [2024-11-15 10:40:59.977918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:29.659 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.660 10:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.660 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.660 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.660 "name": "Existed_Raid", 00:12:29.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.660 "strip_size_kb": 64, 00:12:29.660 "state": "configuring", 00:12:29.660 "raid_level": "raid0", 00:12:29.660 "superblock": false, 00:12:29.660 "num_base_bdevs": 4, 00:12:29.660 
"num_base_bdevs_discovered": 1, 00:12:29.660 "num_base_bdevs_operational": 4, 00:12:29.660 "base_bdevs_list": [ 00:12:29.660 { 00:12:29.660 "name": "BaseBdev1", 00:12:29.660 "uuid": "f2e57934-54b9-48b0-bd8c-4eeedfa5c11b", 00:12:29.660 "is_configured": true, 00:12:29.660 "data_offset": 0, 00:12:29.660 "data_size": 65536 00:12:29.660 }, 00:12:29.660 { 00:12:29.660 "name": "BaseBdev2", 00:12:29.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.660 "is_configured": false, 00:12:29.660 "data_offset": 0, 00:12:29.660 "data_size": 0 00:12:29.660 }, 00:12:29.660 { 00:12:29.660 "name": "BaseBdev3", 00:12:29.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.660 "is_configured": false, 00:12:29.660 "data_offset": 0, 00:12:29.660 "data_size": 0 00:12:29.660 }, 00:12:29.660 { 00:12:29.660 "name": "BaseBdev4", 00:12:29.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.660 "is_configured": false, 00:12:29.660 "data_offset": 0, 00:12:29.660 "data_size": 0 00:12:29.660 } 00:12:29.660 ] 00:12:29.660 }' 00:12:29.660 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.660 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.226 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:30.226 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.226 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.226 [2024-11-15 10:41:00.540889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.226 BaseBdev2 00:12:30.226 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:30.227 10:41:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.227 [ 00:12:30.227 { 00:12:30.227 "name": "BaseBdev2", 00:12:30.227 "aliases": [ 00:12:30.227 "e7688008-a576-4d76-9bfc-1a96edae2ef3" 00:12:30.227 ], 00:12:30.227 "product_name": "Malloc disk", 00:12:30.227 "block_size": 512, 00:12:30.227 "num_blocks": 65536, 00:12:30.227 "uuid": "e7688008-a576-4d76-9bfc-1a96edae2ef3", 00:12:30.227 "assigned_rate_limits": { 00:12:30.227 "rw_ios_per_sec": 0, 00:12:30.227 "rw_mbytes_per_sec": 0, 00:12:30.227 "r_mbytes_per_sec": 0, 00:12:30.227 "w_mbytes_per_sec": 0 00:12:30.227 }, 00:12:30.227 "claimed": true, 00:12:30.227 "claim_type": "exclusive_write", 00:12:30.227 "zoned": false, 00:12:30.227 "supported_io_types": { 
00:12:30.227 "read": true, 00:12:30.227 "write": true, 00:12:30.227 "unmap": true, 00:12:30.227 "flush": true, 00:12:30.227 "reset": true, 00:12:30.227 "nvme_admin": false, 00:12:30.227 "nvme_io": false, 00:12:30.227 "nvme_io_md": false, 00:12:30.227 "write_zeroes": true, 00:12:30.227 "zcopy": true, 00:12:30.227 "get_zone_info": false, 00:12:30.227 "zone_management": false, 00:12:30.227 "zone_append": false, 00:12:30.227 "compare": false, 00:12:30.227 "compare_and_write": false, 00:12:30.227 "abort": true, 00:12:30.227 "seek_hole": false, 00:12:30.227 "seek_data": false, 00:12:30.227 "copy": true, 00:12:30.227 "nvme_iov_md": false 00:12:30.227 }, 00:12:30.227 "memory_domains": [ 00:12:30.227 { 00:12:30.227 "dma_device_id": "system", 00:12:30.227 "dma_device_type": 1 00:12:30.227 }, 00:12:30.227 { 00:12:30.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.227 "dma_device_type": 2 00:12:30.227 } 00:12:30.227 ], 00:12:30.227 "driver_specific": {} 00:12:30.227 } 00:12:30.227 ] 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.227 "name": "Existed_Raid", 00:12:30.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.227 "strip_size_kb": 64, 00:12:30.227 "state": "configuring", 00:12:30.227 "raid_level": "raid0", 00:12:30.227 "superblock": false, 00:12:30.227 "num_base_bdevs": 4, 00:12:30.227 "num_base_bdevs_discovered": 2, 00:12:30.227 "num_base_bdevs_operational": 4, 00:12:30.227 "base_bdevs_list": [ 00:12:30.227 { 00:12:30.227 "name": "BaseBdev1", 00:12:30.227 "uuid": "f2e57934-54b9-48b0-bd8c-4eeedfa5c11b", 00:12:30.227 "is_configured": true, 00:12:30.227 "data_offset": 0, 00:12:30.227 "data_size": 65536 00:12:30.227 }, 00:12:30.227 { 00:12:30.227 "name": "BaseBdev2", 00:12:30.227 "uuid": "e7688008-a576-4d76-9bfc-1a96edae2ef3", 00:12:30.227 
"is_configured": true, 00:12:30.227 "data_offset": 0, 00:12:30.227 "data_size": 65536 00:12:30.227 }, 00:12:30.227 { 00:12:30.227 "name": "BaseBdev3", 00:12:30.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.227 "is_configured": false, 00:12:30.227 "data_offset": 0, 00:12:30.227 "data_size": 0 00:12:30.227 }, 00:12:30.227 { 00:12:30.227 "name": "BaseBdev4", 00:12:30.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.227 "is_configured": false, 00:12:30.227 "data_offset": 0, 00:12:30.227 "data_size": 0 00:12:30.227 } 00:12:30.227 ] 00:12:30.227 }' 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.227 10:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.794 [2024-11-15 10:41:01.110076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.794 BaseBdev3 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.794 [ 00:12:30.794 { 00:12:30.794 "name": "BaseBdev3", 00:12:30.794 "aliases": [ 00:12:30.794 "379eb48e-cb7f-4329-9c7d-7b3355ce8c19" 00:12:30.794 ], 00:12:30.794 "product_name": "Malloc disk", 00:12:30.794 "block_size": 512, 00:12:30.794 "num_blocks": 65536, 00:12:30.794 "uuid": "379eb48e-cb7f-4329-9c7d-7b3355ce8c19", 00:12:30.794 "assigned_rate_limits": { 00:12:30.794 "rw_ios_per_sec": 0, 00:12:30.794 "rw_mbytes_per_sec": 0, 00:12:30.794 "r_mbytes_per_sec": 0, 00:12:30.794 "w_mbytes_per_sec": 0 00:12:30.794 }, 00:12:30.794 "claimed": true, 00:12:30.794 "claim_type": "exclusive_write", 00:12:30.794 "zoned": false, 00:12:30.794 "supported_io_types": { 00:12:30.794 "read": true, 00:12:30.794 "write": true, 00:12:30.794 "unmap": true, 00:12:30.794 "flush": true, 00:12:30.794 "reset": true, 00:12:30.794 "nvme_admin": false, 00:12:30.794 "nvme_io": false, 00:12:30.794 "nvme_io_md": false, 00:12:30.794 "write_zeroes": true, 00:12:30.794 "zcopy": true, 00:12:30.794 "get_zone_info": false, 00:12:30.794 "zone_management": false, 00:12:30.794 "zone_append": false, 00:12:30.794 "compare": false, 00:12:30.794 "compare_and_write": false, 
00:12:30.794 "abort": true, 00:12:30.794 "seek_hole": false, 00:12:30.794 "seek_data": false, 00:12:30.794 "copy": true, 00:12:30.794 "nvme_iov_md": false 00:12:30.794 }, 00:12:30.794 "memory_domains": [ 00:12:30.794 { 00:12:30.794 "dma_device_id": "system", 00:12:30.794 "dma_device_type": 1 00:12:30.794 }, 00:12:30.794 { 00:12:30.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.794 "dma_device_type": 2 00:12:30.794 } 00:12:30.794 ], 00:12:30.794 "driver_specific": {} 00:12:30.794 } 00:12:30.794 ] 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.794 "name": "Existed_Raid", 00:12:30.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.794 "strip_size_kb": 64, 00:12:30.794 "state": "configuring", 00:12:30.794 "raid_level": "raid0", 00:12:30.794 "superblock": false, 00:12:30.794 "num_base_bdevs": 4, 00:12:30.794 "num_base_bdevs_discovered": 3, 00:12:30.794 "num_base_bdevs_operational": 4, 00:12:30.794 "base_bdevs_list": [ 00:12:30.794 { 00:12:30.794 "name": "BaseBdev1", 00:12:30.794 "uuid": "f2e57934-54b9-48b0-bd8c-4eeedfa5c11b", 00:12:30.794 "is_configured": true, 00:12:30.794 "data_offset": 0, 00:12:30.794 "data_size": 65536 00:12:30.794 }, 00:12:30.794 { 00:12:30.794 "name": "BaseBdev2", 00:12:30.794 "uuid": "e7688008-a576-4d76-9bfc-1a96edae2ef3", 00:12:30.794 "is_configured": true, 00:12:30.794 "data_offset": 0, 00:12:30.794 "data_size": 65536 00:12:30.794 }, 00:12:30.794 { 00:12:30.794 "name": "BaseBdev3", 00:12:30.794 "uuid": "379eb48e-cb7f-4329-9c7d-7b3355ce8c19", 00:12:30.794 "is_configured": true, 00:12:30.794 "data_offset": 0, 00:12:30.794 "data_size": 65536 00:12:30.794 }, 00:12:30.794 { 00:12:30.794 "name": "BaseBdev4", 00:12:30.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.794 "is_configured": false, 
00:12:30.794 "data_offset": 0, 00:12:30.794 "data_size": 0 00:12:30.794 } 00:12:30.794 ] 00:12:30.794 }' 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.794 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.361 [2024-11-15 10:41:01.681114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.361 [2024-11-15 10:41:01.681181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:31.361 [2024-11-15 10:41:01.681196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:31.361 [2024-11-15 10:41:01.681603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:31.361 [2024-11-15 10:41:01.681849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:31.361 [2024-11-15 10:41:01.681882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:31.361 [2024-11-15 10:41:01.682275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.361 BaseBdev4 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.361 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.361 [ 00:12:31.361 { 00:12:31.361 "name": "BaseBdev4", 00:12:31.361 "aliases": [ 00:12:31.361 "57209965-85de-47c2-9283-c9c2c3b32beb" 00:12:31.361 ], 00:12:31.361 "product_name": "Malloc disk", 00:12:31.361 "block_size": 512, 00:12:31.361 "num_blocks": 65536, 00:12:31.361 "uuid": "57209965-85de-47c2-9283-c9c2c3b32beb", 00:12:31.361 "assigned_rate_limits": { 00:12:31.361 "rw_ios_per_sec": 0, 00:12:31.361 "rw_mbytes_per_sec": 0, 00:12:31.361 "r_mbytes_per_sec": 0, 00:12:31.361 "w_mbytes_per_sec": 0 00:12:31.361 }, 00:12:31.361 "claimed": true, 00:12:31.361 "claim_type": "exclusive_write", 00:12:31.361 "zoned": false, 00:12:31.361 "supported_io_types": { 00:12:31.361 "read": true, 00:12:31.361 "write": true, 00:12:31.361 "unmap": true, 00:12:31.361 "flush": true, 00:12:31.361 "reset": true, 00:12:31.361 
"nvme_admin": false, 00:12:31.361 "nvme_io": false, 00:12:31.361 "nvme_io_md": false, 00:12:31.361 "write_zeroes": true, 00:12:31.361 "zcopy": true, 00:12:31.361 "get_zone_info": false, 00:12:31.361 "zone_management": false, 00:12:31.361 "zone_append": false, 00:12:31.361 "compare": false, 00:12:31.361 "compare_and_write": false, 00:12:31.361 "abort": true, 00:12:31.361 "seek_hole": false, 00:12:31.361 "seek_data": false, 00:12:31.361 "copy": true, 00:12:31.361 "nvme_iov_md": false 00:12:31.361 }, 00:12:31.361 "memory_domains": [ 00:12:31.361 { 00:12:31.361 "dma_device_id": "system", 00:12:31.361 "dma_device_type": 1 00:12:31.361 }, 00:12:31.361 { 00:12:31.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.362 "dma_device_type": 2 00:12:31.362 } 00:12:31.362 ], 00:12:31.362 "driver_specific": {} 00:12:31.362 } 00:12:31.362 ] 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.362 10:41:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.362 "name": "Existed_Raid", 00:12:31.362 "uuid": "59b9851b-fa29-418b-9c59-d772cece8e50", 00:12:31.362 "strip_size_kb": 64, 00:12:31.362 "state": "online", 00:12:31.362 "raid_level": "raid0", 00:12:31.362 "superblock": false, 00:12:31.362 "num_base_bdevs": 4, 00:12:31.362 "num_base_bdevs_discovered": 4, 00:12:31.362 "num_base_bdevs_operational": 4, 00:12:31.362 "base_bdevs_list": [ 00:12:31.362 { 00:12:31.362 "name": "BaseBdev1", 00:12:31.362 "uuid": "f2e57934-54b9-48b0-bd8c-4eeedfa5c11b", 00:12:31.362 "is_configured": true, 00:12:31.362 "data_offset": 0, 00:12:31.362 "data_size": 65536 00:12:31.362 }, 00:12:31.362 { 00:12:31.362 "name": "BaseBdev2", 00:12:31.362 "uuid": "e7688008-a576-4d76-9bfc-1a96edae2ef3", 00:12:31.362 "is_configured": true, 00:12:31.362 "data_offset": 0, 00:12:31.362 "data_size": 65536 00:12:31.362 }, 00:12:31.362 { 00:12:31.362 "name": "BaseBdev3", 00:12:31.362 "uuid": 
"379eb48e-cb7f-4329-9c7d-7b3355ce8c19", 00:12:31.362 "is_configured": true, 00:12:31.362 "data_offset": 0, 00:12:31.362 "data_size": 65536 00:12:31.362 }, 00:12:31.362 { 00:12:31.362 "name": "BaseBdev4", 00:12:31.362 "uuid": "57209965-85de-47c2-9283-c9c2c3b32beb", 00:12:31.362 "is_configured": true, 00:12:31.362 "data_offset": 0, 00:12:31.362 "data_size": 65536 00:12:31.362 } 00:12:31.362 ] 00:12:31.362 }' 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.362 10:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.928 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:31.928 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:31.928 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:31.928 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:31.928 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:31.928 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:31.928 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.929 [2024-11-15 10:41:02.241729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.929 10:41:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:31.929 "name": "Existed_Raid", 00:12:31.929 "aliases": [ 00:12:31.929 "59b9851b-fa29-418b-9c59-d772cece8e50" 00:12:31.929 ], 00:12:31.929 "product_name": "Raid Volume", 00:12:31.929 "block_size": 512, 00:12:31.929 "num_blocks": 262144, 00:12:31.929 "uuid": "59b9851b-fa29-418b-9c59-d772cece8e50", 00:12:31.929 "assigned_rate_limits": { 00:12:31.929 "rw_ios_per_sec": 0, 00:12:31.929 "rw_mbytes_per_sec": 0, 00:12:31.929 "r_mbytes_per_sec": 0, 00:12:31.929 "w_mbytes_per_sec": 0 00:12:31.929 }, 00:12:31.929 "claimed": false, 00:12:31.929 "zoned": false, 00:12:31.929 "supported_io_types": { 00:12:31.929 "read": true, 00:12:31.929 "write": true, 00:12:31.929 "unmap": true, 00:12:31.929 "flush": true, 00:12:31.929 "reset": true, 00:12:31.929 "nvme_admin": false, 00:12:31.929 "nvme_io": false, 00:12:31.929 "nvme_io_md": false, 00:12:31.929 "write_zeroes": true, 00:12:31.929 "zcopy": false, 00:12:31.929 "get_zone_info": false, 00:12:31.929 "zone_management": false, 00:12:31.929 "zone_append": false, 00:12:31.929 "compare": false, 00:12:31.929 "compare_and_write": false, 00:12:31.929 "abort": false, 00:12:31.929 "seek_hole": false, 00:12:31.929 "seek_data": false, 00:12:31.929 "copy": false, 00:12:31.929 "nvme_iov_md": false 00:12:31.929 }, 00:12:31.929 "memory_domains": [ 00:12:31.929 { 00:12:31.929 "dma_device_id": "system", 00:12:31.929 "dma_device_type": 1 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.929 "dma_device_type": 2 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "dma_device_id": "system", 00:12:31.929 "dma_device_type": 1 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.929 "dma_device_type": 2 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "dma_device_id": "system", 00:12:31.929 "dma_device_type": 1 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:31.929 "dma_device_type": 2 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "dma_device_id": "system", 00:12:31.929 "dma_device_type": 1 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.929 "dma_device_type": 2 00:12:31.929 } 00:12:31.929 ], 00:12:31.929 "driver_specific": { 00:12:31.929 "raid": { 00:12:31.929 "uuid": "59b9851b-fa29-418b-9c59-d772cece8e50", 00:12:31.929 "strip_size_kb": 64, 00:12:31.929 "state": "online", 00:12:31.929 "raid_level": "raid0", 00:12:31.929 "superblock": false, 00:12:31.929 "num_base_bdevs": 4, 00:12:31.929 "num_base_bdevs_discovered": 4, 00:12:31.929 "num_base_bdevs_operational": 4, 00:12:31.929 "base_bdevs_list": [ 00:12:31.929 { 00:12:31.929 "name": "BaseBdev1", 00:12:31.929 "uuid": "f2e57934-54b9-48b0-bd8c-4eeedfa5c11b", 00:12:31.929 "is_configured": true, 00:12:31.929 "data_offset": 0, 00:12:31.929 "data_size": 65536 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "name": "BaseBdev2", 00:12:31.929 "uuid": "e7688008-a576-4d76-9bfc-1a96edae2ef3", 00:12:31.929 "is_configured": true, 00:12:31.929 "data_offset": 0, 00:12:31.929 "data_size": 65536 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "name": "BaseBdev3", 00:12:31.929 "uuid": "379eb48e-cb7f-4329-9c7d-7b3355ce8c19", 00:12:31.929 "is_configured": true, 00:12:31.929 "data_offset": 0, 00:12:31.929 "data_size": 65536 00:12:31.929 }, 00:12:31.929 { 00:12:31.929 "name": "BaseBdev4", 00:12:31.929 "uuid": "57209965-85de-47c2-9283-c9c2c3b32beb", 00:12:31.929 "is_configured": true, 00:12:31.929 "data_offset": 0, 00:12:31.929 "data_size": 65536 00:12:31.929 } 00:12:31.929 ] 00:12:31.929 } 00:12:31.929 } 00:12:31.929 }' 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:31.929 BaseBdev2 00:12:31.929 BaseBdev3 
00:12:31.929 BaseBdev4' 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.929 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.188 10:41:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.188 10:41:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.188 [2024-11-15 10:41:02.633503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:32.188 [2024-11-15 10:41:02.633545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.188 [2024-11-15 10:41:02.633633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.188 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.447 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.447 "name": "Existed_Raid", 00:12:32.447 "uuid": "59b9851b-fa29-418b-9c59-d772cece8e50", 00:12:32.447 "strip_size_kb": 64, 00:12:32.447 "state": "offline", 00:12:32.447 "raid_level": "raid0", 00:12:32.447 "superblock": false, 00:12:32.447 "num_base_bdevs": 4, 00:12:32.447 "num_base_bdevs_discovered": 3, 00:12:32.447 "num_base_bdevs_operational": 3, 00:12:32.447 "base_bdevs_list": [ 00:12:32.447 { 00:12:32.447 "name": null, 00:12:32.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.447 "is_configured": false, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "name": "BaseBdev2", 00:12:32.447 "uuid": "e7688008-a576-4d76-9bfc-1a96edae2ef3", 00:12:32.447 "is_configured": 
true, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "name": "BaseBdev3", 00:12:32.447 "uuid": "379eb48e-cb7f-4329-9c7d-7b3355ce8c19", 00:12:32.447 "is_configured": true, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "name": "BaseBdev4", 00:12:32.447 "uuid": "57209965-85de-47c2-9283-c9c2c3b32beb", 00:12:32.447 "is_configured": true, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 } 00:12:32.447 ] 00:12:32.447 }' 00:12:32.447 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.447 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.705 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:32.705 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.705 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.705 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.705 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.705 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.705 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.964 [2024-11-15 10:41:03.298332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.964 [2024-11-15 10:41:03.434509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.964 10:41:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.964 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 [2024-11-15 10:41:03.570552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:33.223 [2024-11-15 10:41:03.570625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 BaseBdev2 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.223 [ 00:12:33.223 { 00:12:33.223 "name": "BaseBdev2", 00:12:33.223 "aliases": [ 00:12:33.223 "fdddb40f-339b-4f6d-90f4-3a496b57a1cd" 00:12:33.223 ], 00:12:33.223 "product_name": "Malloc disk", 00:12:33.223 "block_size": 512, 00:12:33.223 "num_blocks": 65536, 00:12:33.223 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:33.223 "assigned_rate_limits": { 00:12:33.223 "rw_ios_per_sec": 0, 00:12:33.223 "rw_mbytes_per_sec": 0, 00:12:33.223 "r_mbytes_per_sec": 0, 00:12:33.223 "w_mbytes_per_sec": 0 00:12:33.223 }, 00:12:33.223 "claimed": false, 00:12:33.223 "zoned": false, 00:12:33.223 "supported_io_types": { 00:12:33.223 "read": true, 00:12:33.223 "write": true, 00:12:33.223 "unmap": true, 00:12:33.223 "flush": true, 00:12:33.223 "reset": true, 00:12:33.223 "nvme_admin": false, 00:12:33.223 "nvme_io": false, 00:12:33.223 "nvme_io_md": false, 00:12:33.223 "write_zeroes": true, 00:12:33.223 "zcopy": true, 00:12:33.223 "get_zone_info": false, 00:12:33.223 "zone_management": false, 00:12:33.223 "zone_append": false, 00:12:33.223 "compare": false, 00:12:33.223 "compare_and_write": false, 00:12:33.223 "abort": true, 00:12:33.223 "seek_hole": false, 00:12:33.223 
"seek_data": false, 00:12:33.223 "copy": true, 00:12:33.223 "nvme_iov_md": false 00:12:33.223 }, 00:12:33.223 "memory_domains": [ 00:12:33.223 { 00:12:33.223 "dma_device_id": "system", 00:12:33.223 "dma_device_type": 1 00:12:33.223 }, 00:12:33.223 { 00:12:33.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.223 "dma_device_type": 2 00:12:33.223 } 00:12:33.223 ], 00:12:33.223 "driver_specific": {} 00:12:33.223 } 00:12:33.223 ] 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.223 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.571 BaseBdev3 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.571 [ 00:12:33.571 { 00:12:33.571 "name": "BaseBdev3", 00:12:33.571 "aliases": [ 00:12:33.571 "d3cb6b72-f874-4427-b12a-f599dd022e27" 00:12:33.571 ], 00:12:33.571 "product_name": "Malloc disk", 00:12:33.571 "block_size": 512, 00:12:33.571 "num_blocks": 65536, 00:12:33.571 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:33.571 "assigned_rate_limits": { 00:12:33.571 "rw_ios_per_sec": 0, 00:12:33.571 "rw_mbytes_per_sec": 0, 00:12:33.571 "r_mbytes_per_sec": 0, 00:12:33.571 "w_mbytes_per_sec": 0 00:12:33.571 }, 00:12:33.571 "claimed": false, 00:12:33.571 "zoned": false, 00:12:33.571 "supported_io_types": { 00:12:33.571 "read": true, 00:12:33.571 "write": true, 00:12:33.571 "unmap": true, 00:12:33.571 "flush": true, 00:12:33.571 "reset": true, 00:12:33.571 "nvme_admin": false, 00:12:33.571 "nvme_io": false, 00:12:33.571 "nvme_io_md": false, 00:12:33.571 "write_zeroes": true, 00:12:33.571 "zcopy": true, 00:12:33.571 "get_zone_info": false, 00:12:33.571 "zone_management": false, 00:12:33.571 "zone_append": false, 00:12:33.571 "compare": false, 00:12:33.571 "compare_and_write": false, 00:12:33.571 "abort": true, 00:12:33.571 "seek_hole": false, 00:12:33.571 "seek_data": false, 
00:12:33.571 "copy": true, 00:12:33.571 "nvme_iov_md": false 00:12:33.571 }, 00:12:33.571 "memory_domains": [ 00:12:33.571 { 00:12:33.571 "dma_device_id": "system", 00:12:33.571 "dma_device_type": 1 00:12:33.571 }, 00:12:33.571 { 00:12:33.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.571 "dma_device_type": 2 00:12:33.571 } 00:12:33.571 ], 00:12:33.571 "driver_specific": {} 00:12:33.571 } 00:12:33.571 ] 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.571 BaseBdev4 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:33.571 
10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.571 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.571 [ 00:12:33.571 { 00:12:33.571 "name": "BaseBdev4", 00:12:33.571 "aliases": [ 00:12:33.571 "0535e7c1-34a1-49b8-9a82-8944242e2f0c" 00:12:33.571 ], 00:12:33.571 "product_name": "Malloc disk", 00:12:33.571 "block_size": 512, 00:12:33.571 "num_blocks": 65536, 00:12:33.571 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:33.571 "assigned_rate_limits": { 00:12:33.571 "rw_ios_per_sec": 0, 00:12:33.571 "rw_mbytes_per_sec": 0, 00:12:33.571 "r_mbytes_per_sec": 0, 00:12:33.571 "w_mbytes_per_sec": 0 00:12:33.571 }, 00:12:33.571 "claimed": false, 00:12:33.571 "zoned": false, 00:12:33.571 "supported_io_types": { 00:12:33.571 "read": true, 00:12:33.571 "write": true, 00:12:33.571 "unmap": true, 00:12:33.572 "flush": true, 00:12:33.572 "reset": true, 00:12:33.572 "nvme_admin": false, 00:12:33.572 "nvme_io": false, 00:12:33.572 "nvme_io_md": false, 00:12:33.572 "write_zeroes": true, 00:12:33.572 "zcopy": true, 00:12:33.572 "get_zone_info": false, 00:12:33.572 "zone_management": false, 00:12:33.572 "zone_append": false, 00:12:33.572 "compare": false, 00:12:33.572 "compare_and_write": false, 00:12:33.572 "abort": true, 00:12:33.572 "seek_hole": false, 00:12:33.572 "seek_data": false, 00:12:33.572 
"copy": true, 00:12:33.572 "nvme_iov_md": false 00:12:33.572 }, 00:12:33.572 "memory_domains": [ 00:12:33.572 { 00:12:33.572 "dma_device_id": "system", 00:12:33.572 "dma_device_type": 1 00:12:33.572 }, 00:12:33.572 { 00:12:33.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.572 "dma_device_type": 2 00:12:33.572 } 00:12:33.572 ], 00:12:33.572 "driver_specific": {} 00:12:33.572 } 00:12:33.572 ] 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.572 [2024-11-15 10:41:03.922592] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.572 [2024-11-15 10:41:03.922782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.572 [2024-11-15 10:41:03.922932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.572 [2024-11-15 10:41:03.925408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.572 [2024-11-15 10:41:03.925610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.572 10:41:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.572 "name": "Existed_Raid", 00:12:33.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.572 "strip_size_kb": 64, 00:12:33.572 "state": "configuring", 00:12:33.572 
"raid_level": "raid0", 00:12:33.572 "superblock": false, 00:12:33.572 "num_base_bdevs": 4, 00:12:33.572 "num_base_bdevs_discovered": 3, 00:12:33.572 "num_base_bdevs_operational": 4, 00:12:33.572 "base_bdevs_list": [ 00:12:33.572 { 00:12:33.572 "name": "BaseBdev1", 00:12:33.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.572 "is_configured": false, 00:12:33.572 "data_offset": 0, 00:12:33.572 "data_size": 0 00:12:33.572 }, 00:12:33.572 { 00:12:33.572 "name": "BaseBdev2", 00:12:33.572 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:33.572 "is_configured": true, 00:12:33.572 "data_offset": 0, 00:12:33.572 "data_size": 65536 00:12:33.572 }, 00:12:33.572 { 00:12:33.572 "name": "BaseBdev3", 00:12:33.572 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:33.572 "is_configured": true, 00:12:33.572 "data_offset": 0, 00:12:33.572 "data_size": 65536 00:12:33.572 }, 00:12:33.572 { 00:12:33.572 "name": "BaseBdev4", 00:12:33.572 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:33.572 "is_configured": true, 00:12:33.572 "data_offset": 0, 00:12:33.572 "data_size": 65536 00:12:33.572 } 00:12:33.572 ] 00:12:33.572 }' 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.572 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.138 [2024-11-15 10:41:04.418707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.138 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.139 "name": "Existed_Raid", 00:12:34.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.139 "strip_size_kb": 64, 00:12:34.139 "state": "configuring", 00:12:34.139 "raid_level": "raid0", 00:12:34.139 "superblock": false, 00:12:34.139 
"num_base_bdevs": 4, 00:12:34.139 "num_base_bdevs_discovered": 2, 00:12:34.139 "num_base_bdevs_operational": 4, 00:12:34.139 "base_bdevs_list": [ 00:12:34.139 { 00:12:34.139 "name": "BaseBdev1", 00:12:34.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.139 "is_configured": false, 00:12:34.139 "data_offset": 0, 00:12:34.139 "data_size": 0 00:12:34.139 }, 00:12:34.139 { 00:12:34.139 "name": null, 00:12:34.139 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:34.139 "is_configured": false, 00:12:34.139 "data_offset": 0, 00:12:34.139 "data_size": 65536 00:12:34.139 }, 00:12:34.139 { 00:12:34.139 "name": "BaseBdev3", 00:12:34.139 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:34.139 "is_configured": true, 00:12:34.139 "data_offset": 0, 00:12:34.139 "data_size": 65536 00:12:34.139 }, 00:12:34.139 { 00:12:34.139 "name": "BaseBdev4", 00:12:34.139 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:34.139 "is_configured": true, 00:12:34.139 "data_offset": 0, 00:12:34.139 "data_size": 65536 00:12:34.139 } 00:12:34.139 ] 00:12:34.139 }' 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.139 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.396 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:34.396 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.396 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.396 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.654 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.654 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:34.654 10:41:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:34.654 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.654 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.654 [2024-11-15 10:41:05.020555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.654 BaseBdev1 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.654 [ 00:12:34.654 { 00:12:34.654 "name": "BaseBdev1", 00:12:34.654 "aliases": [ 00:12:34.654 "82d95bc3-0a9e-45bc-be2d-e70d7568a111" 00:12:34.654 ], 00:12:34.654 "product_name": "Malloc disk", 00:12:34.654 "block_size": 512, 00:12:34.654 "num_blocks": 65536, 00:12:34.654 "uuid": "82d95bc3-0a9e-45bc-be2d-e70d7568a111", 00:12:34.654 "assigned_rate_limits": { 00:12:34.654 "rw_ios_per_sec": 0, 00:12:34.654 "rw_mbytes_per_sec": 0, 00:12:34.654 "r_mbytes_per_sec": 0, 00:12:34.654 "w_mbytes_per_sec": 0 00:12:34.654 }, 00:12:34.654 "claimed": true, 00:12:34.654 "claim_type": "exclusive_write", 00:12:34.654 "zoned": false, 00:12:34.654 "supported_io_types": { 00:12:34.654 "read": true, 00:12:34.654 "write": true, 00:12:34.654 "unmap": true, 00:12:34.654 "flush": true, 00:12:34.654 "reset": true, 00:12:34.654 "nvme_admin": false, 00:12:34.654 "nvme_io": false, 00:12:34.654 "nvme_io_md": false, 00:12:34.654 "write_zeroes": true, 00:12:34.654 "zcopy": true, 00:12:34.654 "get_zone_info": false, 00:12:34.654 "zone_management": false, 00:12:34.654 "zone_append": false, 00:12:34.654 "compare": false, 00:12:34.654 "compare_and_write": false, 00:12:34.654 "abort": true, 00:12:34.654 "seek_hole": false, 00:12:34.654 "seek_data": false, 00:12:34.654 "copy": true, 00:12:34.654 "nvme_iov_md": false 00:12:34.654 }, 00:12:34.654 "memory_domains": [ 00:12:34.654 { 00:12:34.654 "dma_device_id": "system", 00:12:34.654 "dma_device_type": 1 00:12:34.654 }, 00:12:34.654 { 00:12:34.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.654 "dma_device_type": 2 00:12:34.654 } 00:12:34.654 ], 00:12:34.654 "driver_specific": {} 00:12:34.654 } 00:12:34.654 ] 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.654 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.654 "name": "Existed_Raid", 00:12:34.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.654 "strip_size_kb": 64, 00:12:34.654 "state": "configuring", 00:12:34.654 "raid_level": "raid0", 00:12:34.654 "superblock": false, 
00:12:34.654 "num_base_bdevs": 4, 00:12:34.654 "num_base_bdevs_discovered": 3, 00:12:34.654 "num_base_bdevs_operational": 4, 00:12:34.654 "base_bdevs_list": [ 00:12:34.654 { 00:12:34.654 "name": "BaseBdev1", 00:12:34.654 "uuid": "82d95bc3-0a9e-45bc-be2d-e70d7568a111", 00:12:34.654 "is_configured": true, 00:12:34.654 "data_offset": 0, 00:12:34.654 "data_size": 65536 00:12:34.654 }, 00:12:34.654 { 00:12:34.654 "name": null, 00:12:34.654 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:34.654 "is_configured": false, 00:12:34.655 "data_offset": 0, 00:12:34.655 "data_size": 65536 00:12:34.655 }, 00:12:34.655 { 00:12:34.655 "name": "BaseBdev3", 00:12:34.655 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:34.655 "is_configured": true, 00:12:34.655 "data_offset": 0, 00:12:34.655 "data_size": 65536 00:12:34.655 }, 00:12:34.655 { 00:12:34.655 "name": "BaseBdev4", 00:12:34.655 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:34.655 "is_configured": true, 00:12:34.655 "data_offset": 0, 00:12:34.655 "data_size": 65536 00:12:34.655 } 00:12:34.655 ] 00:12:34.655 }' 00:12:34.655 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.655 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:35.220 10:41:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.220 [2024-11-15 10:41:05.620821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.220 "name": "Existed_Raid", 00:12:35.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.220 "strip_size_kb": 64, 00:12:35.220 "state": "configuring", 00:12:35.220 "raid_level": "raid0", 00:12:35.220 "superblock": false, 00:12:35.220 "num_base_bdevs": 4, 00:12:35.220 "num_base_bdevs_discovered": 2, 00:12:35.220 "num_base_bdevs_operational": 4, 00:12:35.220 "base_bdevs_list": [ 00:12:35.220 { 00:12:35.220 "name": "BaseBdev1", 00:12:35.220 "uuid": "82d95bc3-0a9e-45bc-be2d-e70d7568a111", 00:12:35.220 "is_configured": true, 00:12:35.220 "data_offset": 0, 00:12:35.220 "data_size": 65536 00:12:35.220 }, 00:12:35.220 { 00:12:35.220 "name": null, 00:12:35.220 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:35.220 "is_configured": false, 00:12:35.220 "data_offset": 0, 00:12:35.220 "data_size": 65536 00:12:35.220 }, 00:12:35.220 { 00:12:35.220 "name": null, 00:12:35.220 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:35.220 "is_configured": false, 00:12:35.220 "data_offset": 0, 00:12:35.220 "data_size": 65536 00:12:35.220 }, 00:12:35.220 { 00:12:35.220 "name": "BaseBdev4", 00:12:35.220 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:35.220 "is_configured": true, 00:12:35.220 "data_offset": 0, 00:12:35.220 "data_size": 65536 00:12:35.220 } 00:12:35.220 ] 00:12:35.220 }' 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.220 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.785 [2024-11-15 10:41:06.180963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.785 "name": "Existed_Raid", 00:12:35.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.785 "strip_size_kb": 64, 00:12:35.785 "state": "configuring", 00:12:35.785 "raid_level": "raid0", 00:12:35.785 "superblock": false, 00:12:35.785 "num_base_bdevs": 4, 00:12:35.785 "num_base_bdevs_discovered": 3, 00:12:35.785 "num_base_bdevs_operational": 4, 00:12:35.785 "base_bdevs_list": [ 00:12:35.785 { 00:12:35.785 "name": "BaseBdev1", 00:12:35.785 "uuid": "82d95bc3-0a9e-45bc-be2d-e70d7568a111", 00:12:35.785 "is_configured": true, 00:12:35.785 "data_offset": 0, 00:12:35.785 "data_size": 65536 00:12:35.785 }, 00:12:35.785 { 00:12:35.785 "name": null, 00:12:35.785 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:35.785 "is_configured": false, 00:12:35.785 "data_offset": 0, 00:12:35.785 "data_size": 65536 00:12:35.785 }, 00:12:35.785 { 00:12:35.785 "name": "BaseBdev3", 00:12:35.785 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:35.785 "is_configured": 
true, 00:12:35.785 "data_offset": 0, 00:12:35.785 "data_size": 65536 00:12:35.785 }, 00:12:35.785 { 00:12:35.785 "name": "BaseBdev4", 00:12:35.785 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:35.785 "is_configured": true, 00:12:35.785 "data_offset": 0, 00:12:35.785 "data_size": 65536 00:12:35.785 } 00:12:35.785 ] 00:12:35.785 }' 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.785 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.351 [2024-11-15 10:41:06.729108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.351 "name": "Existed_Raid", 00:12:36.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.351 "strip_size_kb": 64, 00:12:36.351 "state": "configuring", 00:12:36.351 "raid_level": "raid0", 00:12:36.351 "superblock": false, 00:12:36.351 "num_base_bdevs": 4, 00:12:36.351 "num_base_bdevs_discovered": 2, 00:12:36.351 "num_base_bdevs_operational": 4, 00:12:36.351 
"base_bdevs_list": [ 00:12:36.351 { 00:12:36.351 "name": null, 00:12:36.351 "uuid": "82d95bc3-0a9e-45bc-be2d-e70d7568a111", 00:12:36.351 "is_configured": false, 00:12:36.351 "data_offset": 0, 00:12:36.351 "data_size": 65536 00:12:36.351 }, 00:12:36.351 { 00:12:36.351 "name": null, 00:12:36.351 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:36.351 "is_configured": false, 00:12:36.351 "data_offset": 0, 00:12:36.351 "data_size": 65536 00:12:36.351 }, 00:12:36.351 { 00:12:36.351 "name": "BaseBdev3", 00:12:36.351 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:36.351 "is_configured": true, 00:12:36.351 "data_offset": 0, 00:12:36.351 "data_size": 65536 00:12:36.351 }, 00:12:36.351 { 00:12:36.351 "name": "BaseBdev4", 00:12:36.351 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:36.351 "is_configured": true, 00:12:36.351 "data_offset": 0, 00:12:36.351 "data_size": 65536 00:12:36.351 } 00:12:36.351 ] 00:12:36.351 }' 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.351 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:36.918 10:41:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.918 [2024-11-15 10:41:07.345098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.918 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.919 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.919 10:41:07 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:36.919 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.919 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.919 "name": "Existed_Raid", 00:12:36.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.919 "strip_size_kb": 64, 00:12:36.919 "state": "configuring", 00:12:36.919 "raid_level": "raid0", 00:12:36.919 "superblock": false, 00:12:36.919 "num_base_bdevs": 4, 00:12:36.919 "num_base_bdevs_discovered": 3, 00:12:36.919 "num_base_bdevs_operational": 4, 00:12:36.919 "base_bdevs_list": [ 00:12:36.919 { 00:12:36.919 "name": null, 00:12:36.919 "uuid": "82d95bc3-0a9e-45bc-be2d-e70d7568a111", 00:12:36.919 "is_configured": false, 00:12:36.919 "data_offset": 0, 00:12:36.919 "data_size": 65536 00:12:36.919 }, 00:12:36.919 { 00:12:36.919 "name": "BaseBdev2", 00:12:36.919 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:36.919 "is_configured": true, 00:12:36.919 "data_offset": 0, 00:12:36.919 "data_size": 65536 00:12:36.919 }, 00:12:36.919 { 00:12:36.919 "name": "BaseBdev3", 00:12:36.919 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:36.919 "is_configured": true, 00:12:36.919 "data_offset": 0, 00:12:36.919 "data_size": 65536 00:12:36.919 }, 00:12:36.919 { 00:12:36.919 "name": "BaseBdev4", 00:12:36.919 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:36.919 "is_configured": true, 00:12:36.919 "data_offset": 0, 00:12:36.919 "data_size": 65536 00:12:36.919 } 00:12:36.919 ] 00:12:36.919 }' 00:12:36.919 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.919 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82d95bc3-0a9e-45bc-be2d-e70d7568a111 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.486 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.486 [2024-11-15 10:41:08.006783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:37.486 [2024-11-15 10:41:08.007036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:37.486 [2024-11-15 10:41:08.007062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:37.486 [2024-11-15 10:41:08.007431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:37.486 [2024-11-15 10:41:08.007614] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:37.486 [2024-11-15 10:41:08.007636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:37.486 [2024-11-15 10:41:08.007932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.486 NewBaseBdev 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.486 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.486 [ 00:12:37.486 { 
00:12:37.486 "name": "NewBaseBdev", 00:12:37.486 "aliases": [ 00:12:37.486 "82d95bc3-0a9e-45bc-be2d-e70d7568a111" 00:12:37.486 ], 00:12:37.486 "product_name": "Malloc disk", 00:12:37.486 "block_size": 512, 00:12:37.486 "num_blocks": 65536, 00:12:37.486 "uuid": "82d95bc3-0a9e-45bc-be2d-e70d7568a111", 00:12:37.486 "assigned_rate_limits": { 00:12:37.486 "rw_ios_per_sec": 0, 00:12:37.486 "rw_mbytes_per_sec": 0, 00:12:37.486 "r_mbytes_per_sec": 0, 00:12:37.486 "w_mbytes_per_sec": 0 00:12:37.486 }, 00:12:37.486 "claimed": true, 00:12:37.486 "claim_type": "exclusive_write", 00:12:37.486 "zoned": false, 00:12:37.486 "supported_io_types": { 00:12:37.486 "read": true, 00:12:37.486 "write": true, 00:12:37.486 "unmap": true, 00:12:37.486 "flush": true, 00:12:37.486 "reset": true, 00:12:37.486 "nvme_admin": false, 00:12:37.486 "nvme_io": false, 00:12:37.486 "nvme_io_md": false, 00:12:37.486 "write_zeroes": true, 00:12:37.486 "zcopy": true, 00:12:37.486 "get_zone_info": false, 00:12:37.486 "zone_management": false, 00:12:37.486 "zone_append": false, 00:12:37.486 "compare": false, 00:12:37.486 "compare_and_write": false, 00:12:37.486 "abort": true, 00:12:37.486 "seek_hole": false, 00:12:37.486 "seek_data": false, 00:12:37.486 "copy": true, 00:12:37.486 "nvme_iov_md": false 00:12:37.486 }, 00:12:37.486 "memory_domains": [ 00:12:37.486 { 00:12:37.486 "dma_device_id": "system", 00:12:37.486 "dma_device_type": 1 00:12:37.745 }, 00:12:37.745 { 00:12:37.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.745 "dma_device_type": 2 00:12:37.745 } 00:12:37.745 ], 00:12:37.745 "driver_specific": {} 00:12:37.745 } 00:12:37.745 ] 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:37.745 
10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.745 "name": "Existed_Raid", 00:12:37.745 "uuid": "958ca5c2-c1b5-422a-95a5-7f27849a47a7", 00:12:37.745 "strip_size_kb": 64, 00:12:37.745 "state": "online", 00:12:37.745 "raid_level": "raid0", 00:12:37.745 "superblock": false, 00:12:37.745 "num_base_bdevs": 4, 00:12:37.745 "num_base_bdevs_discovered": 4, 00:12:37.745 
"num_base_bdevs_operational": 4, 00:12:37.745 "base_bdevs_list": [ 00:12:37.745 { 00:12:37.745 "name": "NewBaseBdev", 00:12:37.745 "uuid": "82d95bc3-0a9e-45bc-be2d-e70d7568a111", 00:12:37.745 "is_configured": true, 00:12:37.745 "data_offset": 0, 00:12:37.745 "data_size": 65536 00:12:37.745 }, 00:12:37.745 { 00:12:37.745 "name": "BaseBdev2", 00:12:37.745 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:37.745 "is_configured": true, 00:12:37.745 "data_offset": 0, 00:12:37.745 "data_size": 65536 00:12:37.745 }, 00:12:37.745 { 00:12:37.745 "name": "BaseBdev3", 00:12:37.745 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:37.745 "is_configured": true, 00:12:37.745 "data_offset": 0, 00:12:37.745 "data_size": 65536 00:12:37.745 }, 00:12:37.745 { 00:12:37.745 "name": "BaseBdev4", 00:12:37.745 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:37.745 "is_configured": true, 00:12:37.745 "data_offset": 0, 00:12:37.745 "data_size": 65536 00:12:37.745 } 00:12:37.745 ] 00:12:37.745 }' 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.745 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.004 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.004 [2024-11-15 10:41:08.543435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.262 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.262 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:38.262 "name": "Existed_Raid", 00:12:38.262 "aliases": [ 00:12:38.262 "958ca5c2-c1b5-422a-95a5-7f27849a47a7" 00:12:38.262 ], 00:12:38.262 "product_name": "Raid Volume", 00:12:38.262 "block_size": 512, 00:12:38.262 "num_blocks": 262144, 00:12:38.262 "uuid": "958ca5c2-c1b5-422a-95a5-7f27849a47a7", 00:12:38.262 "assigned_rate_limits": { 00:12:38.262 "rw_ios_per_sec": 0, 00:12:38.262 "rw_mbytes_per_sec": 0, 00:12:38.262 "r_mbytes_per_sec": 0, 00:12:38.262 "w_mbytes_per_sec": 0 00:12:38.262 }, 00:12:38.262 "claimed": false, 00:12:38.262 "zoned": false, 00:12:38.262 "supported_io_types": { 00:12:38.262 "read": true, 00:12:38.262 "write": true, 00:12:38.262 "unmap": true, 00:12:38.262 "flush": true, 00:12:38.262 "reset": true, 00:12:38.262 "nvme_admin": false, 00:12:38.262 "nvme_io": false, 00:12:38.262 "nvme_io_md": false, 00:12:38.262 "write_zeroes": true, 00:12:38.262 "zcopy": false, 00:12:38.262 "get_zone_info": false, 00:12:38.262 "zone_management": false, 00:12:38.262 "zone_append": false, 00:12:38.262 "compare": false, 00:12:38.262 "compare_and_write": false, 00:12:38.262 "abort": false, 00:12:38.262 "seek_hole": false, 00:12:38.262 "seek_data": false, 00:12:38.262 "copy": false, 00:12:38.262 "nvme_iov_md": false 00:12:38.263 }, 00:12:38.263 "memory_domains": [ 00:12:38.263 { 00:12:38.263 "dma_device_id": "system", 
00:12:38.263 "dma_device_type": 1 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.263 "dma_device_type": 2 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "dma_device_id": "system", 00:12:38.263 "dma_device_type": 1 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.263 "dma_device_type": 2 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "dma_device_id": "system", 00:12:38.263 "dma_device_type": 1 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.263 "dma_device_type": 2 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "dma_device_id": "system", 00:12:38.263 "dma_device_type": 1 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.263 "dma_device_type": 2 00:12:38.263 } 00:12:38.263 ], 00:12:38.263 "driver_specific": { 00:12:38.263 "raid": { 00:12:38.263 "uuid": "958ca5c2-c1b5-422a-95a5-7f27849a47a7", 00:12:38.263 "strip_size_kb": 64, 00:12:38.263 "state": "online", 00:12:38.263 "raid_level": "raid0", 00:12:38.263 "superblock": false, 00:12:38.263 "num_base_bdevs": 4, 00:12:38.263 "num_base_bdevs_discovered": 4, 00:12:38.263 "num_base_bdevs_operational": 4, 00:12:38.263 "base_bdevs_list": [ 00:12:38.263 { 00:12:38.263 "name": "NewBaseBdev", 00:12:38.263 "uuid": "82d95bc3-0a9e-45bc-be2d-e70d7568a111", 00:12:38.263 "is_configured": true, 00:12:38.263 "data_offset": 0, 00:12:38.263 "data_size": 65536 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "name": "BaseBdev2", 00:12:38.263 "uuid": "fdddb40f-339b-4f6d-90f4-3a496b57a1cd", 00:12:38.263 "is_configured": true, 00:12:38.263 "data_offset": 0, 00:12:38.263 "data_size": 65536 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "name": "BaseBdev3", 00:12:38.263 "uuid": "d3cb6b72-f874-4427-b12a-f599dd022e27", 00:12:38.263 "is_configured": true, 00:12:38.263 "data_offset": 0, 00:12:38.263 "data_size": 65536 00:12:38.263 }, 00:12:38.263 { 00:12:38.263 "name": "BaseBdev4", 
00:12:38.263 "uuid": "0535e7c1-34a1-49b8-9a82-8944242e2f0c", 00:12:38.263 "is_configured": true, 00:12:38.263 "data_offset": 0, 00:12:38.263 "data_size": 65536 00:12:38.263 } 00:12:38.263 ] 00:12:38.263 } 00:12:38.263 } 00:12:38.263 }' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:38.263 BaseBdev2 00:12:38.263 BaseBdev3 00:12:38.263 BaseBdev4' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.263 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:38.522 10:41:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.522 [2024-11-15 10:41:08.919112] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.522 [2024-11-15 10:41:08.919156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.522 [2024-11-15 10:41:08.919256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.522 [2024-11-15 10:41:08.919365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.522 [2024-11-15 10:41:08.919386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69645 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 69645 ']' 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69645 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69645 00:12:38.522 killing process with pid 69645 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69645' 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69645 00:12:38.522 [2024-11-15 10:41:08.955913] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.522 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69645 00:12:38.781 [2024-11-15 10:41:09.294874] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:40.158 00:12:40.158 real 0m12.581s 00:12:40.158 user 0m21.044s 00:12:40.158 sys 0m1.646s 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.158 ************************************ 00:12:40.158 END TEST raid_state_function_test 00:12:40.158 ************************************ 00:12:40.158 10:41:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:12:40.158 10:41:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:40.158 10:41:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.158 10:41:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.158 ************************************ 00:12:40.158 START TEST raid_state_function_test_sb 00:12:40.158 ************************************ 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:40.158 10:41:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70330 00:12:40.158 Process raid pid: 70330 00:12:40.158 10:41:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70330' 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70330 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70330 ']' 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:40.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:40.158 10:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.158 [2024-11-15 10:41:10.477702] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:12:40.158 [2024-11-15 10:41:10.478613] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.158 [2024-11-15 10:41:10.654511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.435 [2024-11-15 10:41:10.758502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.435 [2024-11-15 10:41:10.945644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.435 [2024-11-15 10:41:10.945707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.028 [2024-11-15 10:41:11.404792] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.028 [2024-11-15 10:41:11.404862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.028 [2024-11-15 10:41:11.404885] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.028 [2024-11-15 10:41:11.404913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.028 [2024-11-15 10:41:11.404923] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:41.028 [2024-11-15 10:41:11.404938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.028 [2024-11-15 10:41:11.404947] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:41.028 [2024-11-15 10:41:11.404961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.028 10:41:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.028 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.028 "name": "Existed_Raid", 00:12:41.028 "uuid": "cedb578b-c311-42d1-b6ec-1a1d9b51bff3", 00:12:41.028 "strip_size_kb": 64, 00:12:41.028 "state": "configuring", 00:12:41.028 "raid_level": "raid0", 00:12:41.028 "superblock": true, 00:12:41.028 "num_base_bdevs": 4, 00:12:41.028 "num_base_bdevs_discovered": 0, 00:12:41.028 "num_base_bdevs_operational": 4, 00:12:41.028 "base_bdevs_list": [ 00:12:41.028 { 00:12:41.028 "name": "BaseBdev1", 00:12:41.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.028 "is_configured": false, 00:12:41.028 "data_offset": 0, 00:12:41.028 "data_size": 0 00:12:41.028 }, 00:12:41.028 { 00:12:41.029 "name": "BaseBdev2", 00:12:41.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.029 "is_configured": false, 00:12:41.029 "data_offset": 0, 00:12:41.029 "data_size": 0 00:12:41.029 }, 00:12:41.029 { 00:12:41.029 "name": "BaseBdev3", 00:12:41.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.029 "is_configured": false, 00:12:41.029 "data_offset": 0, 00:12:41.029 "data_size": 0 00:12:41.029 }, 00:12:41.029 { 00:12:41.029 "name": "BaseBdev4", 00:12:41.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.029 "is_configured": false, 00:12:41.029 "data_offset": 0, 00:12:41.029 "data_size": 0 00:12:41.029 } 00:12:41.029 ] 00:12:41.029 }' 00:12:41.029 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.029 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.600 10:41:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.600 [2024-11-15 10:41:11.948874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:41.600 [2024-11-15 10:41:11.948926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.600 [2024-11-15 10:41:11.956859] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.600 [2024-11-15 10:41:11.956915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.600 [2024-11-15 10:41:11.956931] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.600 [2024-11-15 10:41:11.956947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.600 [2024-11-15 10:41:11.956957] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:41.600 [2024-11-15 10:41:11.956972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.600 [2024-11-15 10:41:11.956981] bdev.c:8653:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:41.600 [2024-11-15 10:41:11.956996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.600 [2024-11-15 10:41:11.997406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.600 BaseBdev1 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:41.600 10:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.600 [ 00:12:41.600 { 00:12:41.600 "name": "BaseBdev1", 00:12:41.600 "aliases": [ 00:12:41.600 "4a1dba2c-3696-4329-a308-056eacb4b693" 00:12:41.600 ], 00:12:41.600 "product_name": "Malloc disk", 00:12:41.600 "block_size": 512, 00:12:41.600 "num_blocks": 65536, 00:12:41.600 "uuid": "4a1dba2c-3696-4329-a308-056eacb4b693", 00:12:41.600 "assigned_rate_limits": { 00:12:41.600 "rw_ios_per_sec": 0, 00:12:41.600 "rw_mbytes_per_sec": 0, 00:12:41.600 "r_mbytes_per_sec": 0, 00:12:41.600 "w_mbytes_per_sec": 0 00:12:41.600 }, 00:12:41.600 "claimed": true, 00:12:41.600 "claim_type": "exclusive_write", 00:12:41.600 "zoned": false, 00:12:41.600 "supported_io_types": { 00:12:41.600 "read": true, 00:12:41.600 "write": true, 00:12:41.600 "unmap": true, 00:12:41.600 "flush": true, 00:12:41.600 "reset": true, 00:12:41.600 "nvme_admin": false, 00:12:41.600 "nvme_io": false, 00:12:41.600 "nvme_io_md": false, 00:12:41.600 "write_zeroes": true, 00:12:41.600 "zcopy": true, 00:12:41.600 "get_zone_info": false, 00:12:41.600 "zone_management": false, 00:12:41.600 "zone_append": false, 00:12:41.600 "compare": false, 00:12:41.600 "compare_and_write": false, 00:12:41.600 "abort": true, 00:12:41.600 "seek_hole": false, 00:12:41.600 "seek_data": false, 00:12:41.600 "copy": true, 00:12:41.600 "nvme_iov_md": false 00:12:41.600 }, 00:12:41.600 "memory_domains": [ 00:12:41.600 { 00:12:41.600 "dma_device_id": "system", 00:12:41.600 "dma_device_type": 1 00:12:41.600 }, 00:12:41.600 { 00:12:41.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.600 "dma_device_type": 2 00:12:41.600 } 
00:12:41.600 ], 00:12:41.600 "driver_specific": {} 00:12:41.600 } 00:12:41.600 ] 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.600 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.601 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.601 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.601 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.601 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.601 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.601 10:41:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.601 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.601 "name": "Existed_Raid", 00:12:41.601 "uuid": "4dc19456-2d29-40ba-90a1-b37f8017e381", 00:12:41.601 "strip_size_kb": 64, 00:12:41.601 "state": "configuring", 00:12:41.601 "raid_level": "raid0", 00:12:41.601 "superblock": true, 00:12:41.601 "num_base_bdevs": 4, 00:12:41.601 "num_base_bdevs_discovered": 1, 00:12:41.601 "num_base_bdevs_operational": 4, 00:12:41.601 "base_bdevs_list": [ 00:12:41.601 { 00:12:41.601 "name": "BaseBdev1", 00:12:41.601 "uuid": "4a1dba2c-3696-4329-a308-056eacb4b693", 00:12:41.601 "is_configured": true, 00:12:41.601 "data_offset": 2048, 00:12:41.601 "data_size": 63488 00:12:41.601 }, 00:12:41.601 { 00:12:41.601 "name": "BaseBdev2", 00:12:41.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.601 "is_configured": false, 00:12:41.601 "data_offset": 0, 00:12:41.601 "data_size": 0 00:12:41.601 }, 00:12:41.601 { 00:12:41.601 "name": "BaseBdev3", 00:12:41.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.601 "is_configured": false, 00:12:41.601 "data_offset": 0, 00:12:41.601 "data_size": 0 00:12:41.601 }, 00:12:41.601 { 00:12:41.601 "name": "BaseBdev4", 00:12:41.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.601 "is_configured": false, 00:12:41.601 "data_offset": 0, 00:12:41.601 "data_size": 0 00:12:41.601 } 00:12:41.601 ] 00:12:41.601 }' 00:12:41.601 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.601 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.168 10:41:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.168 [2024-11-15 10:41:12.565621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.168 [2024-11-15 10:41:12.565688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.168 [2024-11-15 10:41:12.573679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.168 [2024-11-15 10:41:12.575925] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.168 [2024-11-15 10:41:12.575982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.168 [2024-11-15 10:41:12.575999] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:42.168 [2024-11-15 10:41:12.576017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.168 [2024-11-15 10:41:12.576028] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:42.168 [2024-11-15 10:41:12.576042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:42.168 "name": "Existed_Raid", 00:12:42.168 "uuid": "e73aa3bb-9a48-4020-bbdd-53e75a391c21", 00:12:42.168 "strip_size_kb": 64, 00:12:42.168 "state": "configuring", 00:12:42.168 "raid_level": "raid0", 00:12:42.168 "superblock": true, 00:12:42.168 "num_base_bdevs": 4, 00:12:42.168 "num_base_bdevs_discovered": 1, 00:12:42.168 "num_base_bdevs_operational": 4, 00:12:42.168 "base_bdevs_list": [ 00:12:42.168 { 00:12:42.168 "name": "BaseBdev1", 00:12:42.168 "uuid": "4a1dba2c-3696-4329-a308-056eacb4b693", 00:12:42.168 "is_configured": true, 00:12:42.168 "data_offset": 2048, 00:12:42.168 "data_size": 63488 00:12:42.168 }, 00:12:42.168 { 00:12:42.168 "name": "BaseBdev2", 00:12:42.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.168 "is_configured": false, 00:12:42.168 "data_offset": 0, 00:12:42.168 "data_size": 0 00:12:42.168 }, 00:12:42.168 { 00:12:42.168 "name": "BaseBdev3", 00:12:42.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.168 "is_configured": false, 00:12:42.168 "data_offset": 0, 00:12:42.168 "data_size": 0 00:12:42.168 }, 00:12:42.168 { 00:12:42.168 "name": "BaseBdev4", 00:12:42.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.168 "is_configured": false, 00:12:42.168 "data_offset": 0, 00:12:42.168 "data_size": 0 00:12:42.168 } 00:12:42.168 ] 00:12:42.168 }' 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.168 10:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.735 [2024-11-15 10:41:13.171778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:42.735 BaseBdev2 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:42.735 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.736 [ 00:12:42.736 { 00:12:42.736 "name": "BaseBdev2", 00:12:42.736 "aliases": [ 00:12:42.736 "bcf925a1-9da4-4ee3-8b75-78a0da444464" 00:12:42.736 ], 00:12:42.736 "product_name": "Malloc disk", 00:12:42.736 "block_size": 512, 00:12:42.736 "num_blocks": 65536, 00:12:42.736 "uuid": "bcf925a1-9da4-4ee3-8b75-78a0da444464", 
00:12:42.736 "assigned_rate_limits": { 00:12:42.736 "rw_ios_per_sec": 0, 00:12:42.736 "rw_mbytes_per_sec": 0, 00:12:42.736 "r_mbytes_per_sec": 0, 00:12:42.736 "w_mbytes_per_sec": 0 00:12:42.736 }, 00:12:42.736 "claimed": true, 00:12:42.736 "claim_type": "exclusive_write", 00:12:42.736 "zoned": false, 00:12:42.736 "supported_io_types": { 00:12:42.736 "read": true, 00:12:42.736 "write": true, 00:12:42.736 "unmap": true, 00:12:42.736 "flush": true, 00:12:42.736 "reset": true, 00:12:42.736 "nvme_admin": false, 00:12:42.736 "nvme_io": false, 00:12:42.736 "nvme_io_md": false, 00:12:42.736 "write_zeroes": true, 00:12:42.736 "zcopy": true, 00:12:42.736 "get_zone_info": false, 00:12:42.736 "zone_management": false, 00:12:42.736 "zone_append": false, 00:12:42.736 "compare": false, 00:12:42.736 "compare_and_write": false, 00:12:42.736 "abort": true, 00:12:42.736 "seek_hole": false, 00:12:42.736 "seek_data": false, 00:12:42.736 "copy": true, 00:12:42.736 "nvme_iov_md": false 00:12:42.736 }, 00:12:42.736 "memory_domains": [ 00:12:42.736 { 00:12:42.736 "dma_device_id": "system", 00:12:42.736 "dma_device_type": 1 00:12:42.736 }, 00:12:42.736 { 00:12:42.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.736 "dma_device_type": 2 00:12:42.736 } 00:12:42.736 ], 00:12:42.736 "driver_specific": {} 00:12:42.736 } 00:12:42.736 ] 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.736 "name": "Existed_Raid", 00:12:42.736 "uuid": "e73aa3bb-9a48-4020-bbdd-53e75a391c21", 00:12:42.736 "strip_size_kb": 64, 00:12:42.736 "state": "configuring", 00:12:42.736 "raid_level": "raid0", 00:12:42.736 "superblock": true, 00:12:42.736 "num_base_bdevs": 4, 00:12:42.736 "num_base_bdevs_discovered": 2, 00:12:42.736 
"num_base_bdevs_operational": 4, 00:12:42.736 "base_bdevs_list": [ 00:12:42.736 { 00:12:42.736 "name": "BaseBdev1", 00:12:42.736 "uuid": "4a1dba2c-3696-4329-a308-056eacb4b693", 00:12:42.736 "is_configured": true, 00:12:42.736 "data_offset": 2048, 00:12:42.736 "data_size": 63488 00:12:42.736 }, 00:12:42.736 { 00:12:42.736 "name": "BaseBdev2", 00:12:42.736 "uuid": "bcf925a1-9da4-4ee3-8b75-78a0da444464", 00:12:42.736 "is_configured": true, 00:12:42.736 "data_offset": 2048, 00:12:42.736 "data_size": 63488 00:12:42.736 }, 00:12:42.736 { 00:12:42.736 "name": "BaseBdev3", 00:12:42.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.736 "is_configured": false, 00:12:42.736 "data_offset": 0, 00:12:42.736 "data_size": 0 00:12:42.736 }, 00:12:42.736 { 00:12:42.736 "name": "BaseBdev4", 00:12:42.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.736 "is_configured": false, 00:12:42.736 "data_offset": 0, 00:12:42.736 "data_size": 0 00:12:42.736 } 00:12:42.736 ] 00:12:42.736 }' 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.736 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.304 [2024-11-15 10:41:13.756964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.304 BaseBdev3 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.304 [ 00:12:43.304 { 00:12:43.304 "name": "BaseBdev3", 00:12:43.304 "aliases": [ 00:12:43.304 "d928e640-e090-42f2-abf7-8c04291f2c0e" 00:12:43.304 ], 00:12:43.304 "product_name": "Malloc disk", 00:12:43.304 "block_size": 512, 00:12:43.304 "num_blocks": 65536, 00:12:43.304 "uuid": "d928e640-e090-42f2-abf7-8c04291f2c0e", 00:12:43.304 "assigned_rate_limits": { 00:12:43.304 "rw_ios_per_sec": 0, 00:12:43.304 "rw_mbytes_per_sec": 0, 00:12:43.304 "r_mbytes_per_sec": 0, 00:12:43.304 "w_mbytes_per_sec": 0 00:12:43.304 }, 00:12:43.304 "claimed": true, 00:12:43.304 "claim_type": "exclusive_write", 00:12:43.304 "zoned": false, 00:12:43.304 "supported_io_types": { 
00:12:43.304 "read": true, 00:12:43.304 "write": true, 00:12:43.304 "unmap": true, 00:12:43.304 "flush": true, 00:12:43.304 "reset": true, 00:12:43.304 "nvme_admin": false, 00:12:43.304 "nvme_io": false, 00:12:43.304 "nvme_io_md": false, 00:12:43.304 "write_zeroes": true, 00:12:43.304 "zcopy": true, 00:12:43.304 "get_zone_info": false, 00:12:43.304 "zone_management": false, 00:12:43.304 "zone_append": false, 00:12:43.304 "compare": false, 00:12:43.304 "compare_and_write": false, 00:12:43.304 "abort": true, 00:12:43.304 "seek_hole": false, 00:12:43.304 "seek_data": false, 00:12:43.304 "copy": true, 00:12:43.304 "nvme_iov_md": false 00:12:43.304 }, 00:12:43.304 "memory_domains": [ 00:12:43.304 { 00:12:43.304 "dma_device_id": "system", 00:12:43.304 "dma_device_type": 1 00:12:43.304 }, 00:12:43.304 { 00:12:43.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.304 "dma_device_type": 2 00:12:43.304 } 00:12:43.304 ], 00:12:43.304 "driver_specific": {} 00:12:43.304 } 00:12:43.304 ] 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.304 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.305 "name": "Existed_Raid", 00:12:43.305 "uuid": "e73aa3bb-9a48-4020-bbdd-53e75a391c21", 00:12:43.305 "strip_size_kb": 64, 00:12:43.305 "state": "configuring", 00:12:43.305 "raid_level": "raid0", 00:12:43.305 "superblock": true, 00:12:43.305 "num_base_bdevs": 4, 00:12:43.305 "num_base_bdevs_discovered": 3, 00:12:43.305 "num_base_bdevs_operational": 4, 00:12:43.305 "base_bdevs_list": [ 00:12:43.305 { 00:12:43.305 "name": "BaseBdev1", 00:12:43.305 "uuid": "4a1dba2c-3696-4329-a308-056eacb4b693", 00:12:43.305 "is_configured": true, 00:12:43.305 "data_offset": 2048, 00:12:43.305 "data_size": 63488 00:12:43.305 }, 00:12:43.305 { 00:12:43.305 "name": "BaseBdev2", 00:12:43.305 
"uuid": "bcf925a1-9da4-4ee3-8b75-78a0da444464", 00:12:43.305 "is_configured": true, 00:12:43.305 "data_offset": 2048, 00:12:43.305 "data_size": 63488 00:12:43.305 }, 00:12:43.305 { 00:12:43.305 "name": "BaseBdev3", 00:12:43.305 "uuid": "d928e640-e090-42f2-abf7-8c04291f2c0e", 00:12:43.305 "is_configured": true, 00:12:43.305 "data_offset": 2048, 00:12:43.305 "data_size": 63488 00:12:43.305 }, 00:12:43.305 { 00:12:43.305 "name": "BaseBdev4", 00:12:43.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.305 "is_configured": false, 00:12:43.305 "data_offset": 0, 00:12:43.305 "data_size": 0 00:12:43.305 } 00:12:43.305 ] 00:12:43.305 }' 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.305 10:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.872 [2024-11-15 10:41:14.323408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:43.872 [2024-11-15 10:41:14.323755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:43.872 [2024-11-15 10:41:14.323790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:43.872 [2024-11-15 10:41:14.324127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:43.872 BaseBdev4 00:12:43.872 [2024-11-15 10:41:14.324373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:43.872 [2024-11-15 10:41:14.324407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:43.872 [2024-11-15 10:41:14.324586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.872 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.872 [ 00:12:43.872 { 00:12:43.872 "name": "BaseBdev4", 00:12:43.872 "aliases": [ 00:12:43.872 "186a2d7c-8064-481e-8ca2-2092a5b7df4f" 00:12:43.872 ], 00:12:43.872 "product_name": "Malloc disk", 00:12:43.872 "block_size": 512, 00:12:43.872 
"num_blocks": 65536, 00:12:43.872 "uuid": "186a2d7c-8064-481e-8ca2-2092a5b7df4f", 00:12:43.872 "assigned_rate_limits": { 00:12:43.872 "rw_ios_per_sec": 0, 00:12:43.872 "rw_mbytes_per_sec": 0, 00:12:43.872 "r_mbytes_per_sec": 0, 00:12:43.872 "w_mbytes_per_sec": 0 00:12:43.872 }, 00:12:43.873 "claimed": true, 00:12:43.873 "claim_type": "exclusive_write", 00:12:43.873 "zoned": false, 00:12:43.873 "supported_io_types": { 00:12:43.873 "read": true, 00:12:43.873 "write": true, 00:12:43.873 "unmap": true, 00:12:43.873 "flush": true, 00:12:43.873 "reset": true, 00:12:43.873 "nvme_admin": false, 00:12:43.873 "nvme_io": false, 00:12:43.873 "nvme_io_md": false, 00:12:43.873 "write_zeroes": true, 00:12:43.873 "zcopy": true, 00:12:43.873 "get_zone_info": false, 00:12:43.873 "zone_management": false, 00:12:43.873 "zone_append": false, 00:12:43.873 "compare": false, 00:12:43.873 "compare_and_write": false, 00:12:43.873 "abort": true, 00:12:43.873 "seek_hole": false, 00:12:43.873 "seek_data": false, 00:12:43.873 "copy": true, 00:12:43.873 "nvme_iov_md": false 00:12:43.873 }, 00:12:43.873 "memory_domains": [ 00:12:43.873 { 00:12:43.873 "dma_device_id": "system", 00:12:43.873 "dma_device_type": 1 00:12:43.873 }, 00:12:43.873 { 00:12:43.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.873 "dma_device_type": 2 00:12:43.873 } 00:12:43.873 ], 00:12:43.873 "driver_specific": {} 00:12:43.873 } 00:12:43.873 ] 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.873 "name": "Existed_Raid", 00:12:43.873 "uuid": "e73aa3bb-9a48-4020-bbdd-53e75a391c21", 00:12:43.873 "strip_size_kb": 64, 00:12:43.873 "state": "online", 00:12:43.873 "raid_level": "raid0", 00:12:43.873 "superblock": true, 00:12:43.873 "num_base_bdevs": 4, 
00:12:43.873 "num_base_bdevs_discovered": 4, 00:12:43.873 "num_base_bdevs_operational": 4, 00:12:43.873 "base_bdevs_list": [ 00:12:43.873 { 00:12:43.873 "name": "BaseBdev1", 00:12:43.873 "uuid": "4a1dba2c-3696-4329-a308-056eacb4b693", 00:12:43.873 "is_configured": true, 00:12:43.873 "data_offset": 2048, 00:12:43.873 "data_size": 63488 00:12:43.873 }, 00:12:43.873 { 00:12:43.873 "name": "BaseBdev2", 00:12:43.873 "uuid": "bcf925a1-9da4-4ee3-8b75-78a0da444464", 00:12:43.873 "is_configured": true, 00:12:43.873 "data_offset": 2048, 00:12:43.873 "data_size": 63488 00:12:43.873 }, 00:12:43.873 { 00:12:43.873 "name": "BaseBdev3", 00:12:43.873 "uuid": "d928e640-e090-42f2-abf7-8c04291f2c0e", 00:12:43.873 "is_configured": true, 00:12:43.873 "data_offset": 2048, 00:12:43.873 "data_size": 63488 00:12:43.873 }, 00:12:43.873 { 00:12:43.873 "name": "BaseBdev4", 00:12:43.873 "uuid": "186a2d7c-8064-481e-8ca2-2092a5b7df4f", 00:12:43.873 "is_configured": true, 00:12:43.873 "data_offset": 2048, 00:12:43.873 "data_size": 63488 00:12:43.873 } 00:12:43.873 ] 00:12:43.873 }' 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.873 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.440 
10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.440 [2024-11-15 10:41:14.872034] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.440 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.440 "name": "Existed_Raid", 00:12:44.440 "aliases": [ 00:12:44.440 "e73aa3bb-9a48-4020-bbdd-53e75a391c21" 00:12:44.440 ], 00:12:44.440 "product_name": "Raid Volume", 00:12:44.440 "block_size": 512, 00:12:44.440 "num_blocks": 253952, 00:12:44.440 "uuid": "e73aa3bb-9a48-4020-bbdd-53e75a391c21", 00:12:44.440 "assigned_rate_limits": { 00:12:44.440 "rw_ios_per_sec": 0, 00:12:44.440 "rw_mbytes_per_sec": 0, 00:12:44.440 "r_mbytes_per_sec": 0, 00:12:44.440 "w_mbytes_per_sec": 0 00:12:44.440 }, 00:12:44.440 "claimed": false, 00:12:44.440 "zoned": false, 00:12:44.440 "supported_io_types": { 00:12:44.440 "read": true, 00:12:44.440 "write": true, 00:12:44.440 "unmap": true, 00:12:44.440 "flush": true, 00:12:44.440 "reset": true, 00:12:44.440 "nvme_admin": false, 00:12:44.440 "nvme_io": false, 00:12:44.440 "nvme_io_md": false, 00:12:44.440 "write_zeroes": true, 00:12:44.440 "zcopy": false, 00:12:44.440 "get_zone_info": false, 00:12:44.440 "zone_management": false, 00:12:44.440 "zone_append": false, 00:12:44.440 "compare": false, 00:12:44.440 "compare_and_write": false, 00:12:44.440 "abort": false, 00:12:44.440 "seek_hole": false, 00:12:44.440 "seek_data": false, 00:12:44.440 "copy": false, 00:12:44.440 
"nvme_iov_md": false 00:12:44.440 }, 00:12:44.440 "memory_domains": [ 00:12:44.440 { 00:12:44.440 "dma_device_id": "system", 00:12:44.440 "dma_device_type": 1 00:12:44.440 }, 00:12:44.440 { 00:12:44.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.440 "dma_device_type": 2 00:12:44.440 }, 00:12:44.440 { 00:12:44.440 "dma_device_id": "system", 00:12:44.440 "dma_device_type": 1 00:12:44.440 }, 00:12:44.440 { 00:12:44.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.440 "dma_device_type": 2 00:12:44.440 }, 00:12:44.440 { 00:12:44.440 "dma_device_id": "system", 00:12:44.440 "dma_device_type": 1 00:12:44.440 }, 00:12:44.440 { 00:12:44.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.440 "dma_device_type": 2 00:12:44.440 }, 00:12:44.440 { 00:12:44.440 "dma_device_id": "system", 00:12:44.440 "dma_device_type": 1 00:12:44.440 }, 00:12:44.440 { 00:12:44.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.440 "dma_device_type": 2 00:12:44.440 } 00:12:44.440 ], 00:12:44.440 "driver_specific": { 00:12:44.440 "raid": { 00:12:44.440 "uuid": "e73aa3bb-9a48-4020-bbdd-53e75a391c21", 00:12:44.440 "strip_size_kb": 64, 00:12:44.440 "state": "online", 00:12:44.440 "raid_level": "raid0", 00:12:44.440 "superblock": true, 00:12:44.440 "num_base_bdevs": 4, 00:12:44.440 "num_base_bdevs_discovered": 4, 00:12:44.440 "num_base_bdevs_operational": 4, 00:12:44.440 "base_bdevs_list": [ 00:12:44.440 { 00:12:44.440 "name": "BaseBdev1", 00:12:44.440 "uuid": "4a1dba2c-3696-4329-a308-056eacb4b693", 00:12:44.440 "is_configured": true, 00:12:44.440 "data_offset": 2048, 00:12:44.440 "data_size": 63488 00:12:44.440 }, 00:12:44.440 { 00:12:44.440 "name": "BaseBdev2", 00:12:44.440 "uuid": "bcf925a1-9da4-4ee3-8b75-78a0da444464", 00:12:44.440 "is_configured": true, 00:12:44.440 "data_offset": 2048, 00:12:44.440 "data_size": 63488 00:12:44.440 }, 00:12:44.440 { 00:12:44.441 "name": "BaseBdev3", 00:12:44.441 "uuid": "d928e640-e090-42f2-abf7-8c04291f2c0e", 00:12:44.441 "is_configured": true, 
00:12:44.441 "data_offset": 2048, 00:12:44.441 "data_size": 63488 00:12:44.441 }, 00:12:44.441 { 00:12:44.441 "name": "BaseBdev4", 00:12:44.441 "uuid": "186a2d7c-8064-481e-8ca2-2092a5b7df4f", 00:12:44.441 "is_configured": true, 00:12:44.441 "data_offset": 2048, 00:12:44.441 "data_size": 63488 00:12:44.441 } 00:12:44.441 ] 00:12:44.441 } 00:12:44.441 } 00:12:44.441 }' 00:12:44.441 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.441 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:44.441 BaseBdev2 00:12:44.441 BaseBdev3 00:12:44.441 BaseBdev4' 00:12:44.441 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.700 10:41:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.700 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.700 [2024-11-15 10:41:15.223776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:44.700 [2024-11-15 10:41:15.223820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.700 [2024-11-15 10:41:15.223886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.959 "name": "Existed_Raid", 00:12:44.959 "uuid": "e73aa3bb-9a48-4020-bbdd-53e75a391c21", 00:12:44.959 "strip_size_kb": 64, 00:12:44.959 "state": "offline", 00:12:44.959 "raid_level": "raid0", 00:12:44.959 "superblock": true, 00:12:44.959 "num_base_bdevs": 4, 00:12:44.959 "num_base_bdevs_discovered": 3, 00:12:44.959 "num_base_bdevs_operational": 3, 00:12:44.959 "base_bdevs_list": [ 00:12:44.959 { 00:12:44.959 "name": null, 00:12:44.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.959 "is_configured": false, 00:12:44.959 "data_offset": 0, 00:12:44.959 "data_size": 63488 00:12:44.959 }, 00:12:44.959 { 00:12:44.959 "name": "BaseBdev2", 00:12:44.959 "uuid": "bcf925a1-9da4-4ee3-8b75-78a0da444464", 00:12:44.959 "is_configured": true, 00:12:44.959 "data_offset": 2048, 00:12:44.959 "data_size": 63488 00:12:44.959 }, 00:12:44.959 { 00:12:44.959 "name": "BaseBdev3", 00:12:44.959 "uuid": "d928e640-e090-42f2-abf7-8c04291f2c0e", 00:12:44.959 "is_configured": true, 00:12:44.959 "data_offset": 2048, 00:12:44.959 "data_size": 63488 00:12:44.959 }, 00:12:44.959 { 00:12:44.959 "name": "BaseBdev4", 00:12:44.959 "uuid": "186a2d7c-8064-481e-8ca2-2092a5b7df4f", 00:12:44.959 "is_configured": true, 00:12:44.959 "data_offset": 2048, 00:12:44.959 "data_size": 63488 00:12:44.959 } 00:12:44.959 ] 00:12:44.959 }' 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.959 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.527 
10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 [2024-11-15 10:41:15.868496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:45.527 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:45.527 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.527 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:45.527 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.527 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.527 [2024-11-15 10:41:16.029050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:45.786 10:41:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.786 [2024-11-15 10:41:16.162013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:45.786 [2024-11-15 10:41:16.162082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.786 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.045 BaseBdev2 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.045 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.045 [ 00:12:46.045 { 00:12:46.045 "name": "BaseBdev2", 00:12:46.045 "aliases": [ 00:12:46.045 
"2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a" 00:12:46.045 ], 00:12:46.045 "product_name": "Malloc disk", 00:12:46.045 "block_size": 512, 00:12:46.045 "num_blocks": 65536, 00:12:46.045 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:46.045 "assigned_rate_limits": { 00:12:46.045 "rw_ios_per_sec": 0, 00:12:46.045 "rw_mbytes_per_sec": 0, 00:12:46.045 "r_mbytes_per_sec": 0, 00:12:46.045 "w_mbytes_per_sec": 0 00:12:46.045 }, 00:12:46.045 "claimed": false, 00:12:46.045 "zoned": false, 00:12:46.045 "supported_io_types": { 00:12:46.045 "read": true, 00:12:46.045 "write": true, 00:12:46.045 "unmap": true, 00:12:46.045 "flush": true, 00:12:46.045 "reset": true, 00:12:46.045 "nvme_admin": false, 00:12:46.045 "nvme_io": false, 00:12:46.045 "nvme_io_md": false, 00:12:46.045 "write_zeroes": true, 00:12:46.045 "zcopy": true, 00:12:46.045 "get_zone_info": false, 00:12:46.045 "zone_management": false, 00:12:46.045 "zone_append": false, 00:12:46.045 "compare": false, 00:12:46.045 "compare_and_write": false, 00:12:46.045 "abort": true, 00:12:46.045 "seek_hole": false, 00:12:46.045 "seek_data": false, 00:12:46.045 "copy": true, 00:12:46.045 "nvme_iov_md": false 00:12:46.045 }, 00:12:46.045 "memory_domains": [ 00:12:46.045 { 00:12:46.045 "dma_device_id": "system", 00:12:46.045 "dma_device_type": 1 00:12:46.045 }, 00:12:46.046 { 00:12:46.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.046 "dma_device_type": 2 00:12:46.046 } 00:12:46.046 ], 00:12:46.046 "driver_specific": {} 00:12:46.046 } 00:12:46.046 ] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.046 10:41:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.046 BaseBdev3 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.046 [ 00:12:46.046 { 
00:12:46.046 "name": "BaseBdev3", 00:12:46.046 "aliases": [ 00:12:46.046 "67e67d0e-75d9-4a98-a262-276dfd20e258" 00:12:46.046 ], 00:12:46.046 "product_name": "Malloc disk", 00:12:46.046 "block_size": 512, 00:12:46.046 "num_blocks": 65536, 00:12:46.046 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:46.046 "assigned_rate_limits": { 00:12:46.046 "rw_ios_per_sec": 0, 00:12:46.046 "rw_mbytes_per_sec": 0, 00:12:46.046 "r_mbytes_per_sec": 0, 00:12:46.046 "w_mbytes_per_sec": 0 00:12:46.046 }, 00:12:46.046 "claimed": false, 00:12:46.046 "zoned": false, 00:12:46.046 "supported_io_types": { 00:12:46.046 "read": true, 00:12:46.046 "write": true, 00:12:46.046 "unmap": true, 00:12:46.046 "flush": true, 00:12:46.046 "reset": true, 00:12:46.046 "nvme_admin": false, 00:12:46.046 "nvme_io": false, 00:12:46.046 "nvme_io_md": false, 00:12:46.046 "write_zeroes": true, 00:12:46.046 "zcopy": true, 00:12:46.046 "get_zone_info": false, 00:12:46.046 "zone_management": false, 00:12:46.046 "zone_append": false, 00:12:46.046 "compare": false, 00:12:46.046 "compare_and_write": false, 00:12:46.046 "abort": true, 00:12:46.046 "seek_hole": false, 00:12:46.046 "seek_data": false, 00:12:46.046 "copy": true, 00:12:46.046 "nvme_iov_md": false 00:12:46.046 }, 00:12:46.046 "memory_domains": [ 00:12:46.046 { 00:12:46.046 "dma_device_id": "system", 00:12:46.046 "dma_device_type": 1 00:12:46.046 }, 00:12:46.046 { 00:12:46.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.046 "dma_device_type": 2 00:12:46.046 } 00:12:46.046 ], 00:12:46.046 "driver_specific": {} 00:12:46.046 } 00:12:46.046 ] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.046 BaseBdev4 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:46.046 [ 00:12:46.046 { 00:12:46.046 "name": "BaseBdev4", 00:12:46.046 "aliases": [ 00:12:46.046 "cacad1a6-4890-4ecc-afdb-75b7ae58d43d" 00:12:46.046 ], 00:12:46.046 "product_name": "Malloc disk", 00:12:46.046 "block_size": 512, 00:12:46.046 "num_blocks": 65536, 00:12:46.046 "uuid": "cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:46.046 "assigned_rate_limits": { 00:12:46.046 "rw_ios_per_sec": 0, 00:12:46.046 "rw_mbytes_per_sec": 0, 00:12:46.046 "r_mbytes_per_sec": 0, 00:12:46.046 "w_mbytes_per_sec": 0 00:12:46.046 }, 00:12:46.046 "claimed": false, 00:12:46.046 "zoned": false, 00:12:46.046 "supported_io_types": { 00:12:46.046 "read": true, 00:12:46.046 "write": true, 00:12:46.046 "unmap": true, 00:12:46.046 "flush": true, 00:12:46.046 "reset": true, 00:12:46.046 "nvme_admin": false, 00:12:46.046 "nvme_io": false, 00:12:46.046 "nvme_io_md": false, 00:12:46.046 "write_zeroes": true, 00:12:46.046 "zcopy": true, 00:12:46.046 "get_zone_info": false, 00:12:46.046 "zone_management": false, 00:12:46.046 "zone_append": false, 00:12:46.046 "compare": false, 00:12:46.046 "compare_and_write": false, 00:12:46.046 "abort": true, 00:12:46.046 "seek_hole": false, 00:12:46.046 "seek_data": false, 00:12:46.046 "copy": true, 00:12:46.046 "nvme_iov_md": false 00:12:46.046 }, 00:12:46.046 "memory_domains": [ 00:12:46.046 { 00:12:46.046 "dma_device_id": "system", 00:12:46.046 "dma_device_type": 1 00:12:46.046 }, 00:12:46.046 { 00:12:46.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.046 "dma_device_type": 2 00:12:46.046 } 00:12:46.046 ], 00:12:46.046 "driver_specific": {} 00:12:46.046 } 00:12:46.046 ] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.046 10:41:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.046 [2024-11-15 10:41:16.519849] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.046 [2024-11-15 10:41:16.519907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.046 [2024-11-15 10:41:16.519940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.046 [2024-11-15 10:41:16.522241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.046 [2024-11-15 10:41:16.522321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:46.046 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.047 "name": "Existed_Raid", 00:12:46.047 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:46.047 "strip_size_kb": 64, 00:12:46.047 "state": "configuring", 00:12:46.047 "raid_level": "raid0", 00:12:46.047 "superblock": true, 00:12:46.047 "num_base_bdevs": 4, 00:12:46.047 "num_base_bdevs_discovered": 3, 00:12:46.047 "num_base_bdevs_operational": 4, 00:12:46.047 "base_bdevs_list": [ 00:12:46.047 { 00:12:46.047 "name": "BaseBdev1", 00:12:46.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.047 "is_configured": false, 00:12:46.047 "data_offset": 0, 00:12:46.047 "data_size": 0 00:12:46.047 }, 00:12:46.047 { 00:12:46.047 "name": "BaseBdev2", 00:12:46.047 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:46.047 "is_configured": true, 00:12:46.047 "data_offset": 2048, 00:12:46.047 "data_size": 63488 
00:12:46.047 }, 00:12:46.047 { 00:12:46.047 "name": "BaseBdev3", 00:12:46.047 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:46.047 "is_configured": true, 00:12:46.047 "data_offset": 2048, 00:12:46.047 "data_size": 63488 00:12:46.047 }, 00:12:46.047 { 00:12:46.047 "name": "BaseBdev4", 00:12:46.047 "uuid": "cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:46.047 "is_configured": true, 00:12:46.047 "data_offset": 2048, 00:12:46.047 "data_size": 63488 00:12:46.047 } 00:12:46.047 ] 00:12:46.047 }' 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.047 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.616 [2024-11-15 10:41:17.068001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.616 "name": "Existed_Raid", 00:12:46.616 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:46.616 "strip_size_kb": 64, 00:12:46.616 "state": "configuring", 00:12:46.616 "raid_level": "raid0", 00:12:46.616 "superblock": true, 00:12:46.616 "num_base_bdevs": 4, 00:12:46.616 "num_base_bdevs_discovered": 2, 00:12:46.616 "num_base_bdevs_operational": 4, 00:12:46.616 "base_bdevs_list": [ 00:12:46.616 { 00:12:46.616 "name": "BaseBdev1", 00:12:46.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.616 "is_configured": false, 00:12:46.616 "data_offset": 0, 00:12:46.616 "data_size": 0 00:12:46.616 }, 00:12:46.616 { 00:12:46.616 "name": null, 00:12:46.616 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:46.616 "is_configured": false, 00:12:46.616 "data_offset": 0, 00:12:46.616 "data_size": 63488 
00:12:46.616 }, 00:12:46.616 { 00:12:46.616 "name": "BaseBdev3", 00:12:46.616 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:46.616 "is_configured": true, 00:12:46.616 "data_offset": 2048, 00:12:46.616 "data_size": 63488 00:12:46.616 }, 00:12:46.616 { 00:12:46.616 "name": "BaseBdev4", 00:12:46.616 "uuid": "cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:46.616 "is_configured": true, 00:12:46.616 "data_offset": 2048, 00:12:46.616 "data_size": 63488 00:12:46.616 } 00:12:46.616 ] 00:12:46.616 }' 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.616 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.210 [2024-11-15 10:41:17.677796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.210 BaseBdev1 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.210 [ 00:12:47.210 { 00:12:47.210 "name": "BaseBdev1", 00:12:47.210 "aliases": [ 00:12:47.210 "7617a2ca-2a6f-4e1a-b634-d75d1809d71c" 00:12:47.210 ], 00:12:47.210 "product_name": "Malloc disk", 00:12:47.210 "block_size": 512, 00:12:47.210 "num_blocks": 65536, 00:12:47.210 "uuid": "7617a2ca-2a6f-4e1a-b634-d75d1809d71c", 00:12:47.210 "assigned_rate_limits": { 00:12:47.210 "rw_ios_per_sec": 0, 00:12:47.210 "rw_mbytes_per_sec": 0, 
00:12:47.210 "r_mbytes_per_sec": 0, 00:12:47.210 "w_mbytes_per_sec": 0 00:12:47.210 }, 00:12:47.210 "claimed": true, 00:12:47.210 "claim_type": "exclusive_write", 00:12:47.210 "zoned": false, 00:12:47.210 "supported_io_types": { 00:12:47.210 "read": true, 00:12:47.210 "write": true, 00:12:47.210 "unmap": true, 00:12:47.210 "flush": true, 00:12:47.210 "reset": true, 00:12:47.210 "nvme_admin": false, 00:12:47.210 "nvme_io": false, 00:12:47.210 "nvme_io_md": false, 00:12:47.210 "write_zeroes": true, 00:12:47.210 "zcopy": true, 00:12:47.210 "get_zone_info": false, 00:12:47.210 "zone_management": false, 00:12:47.210 "zone_append": false, 00:12:47.210 "compare": false, 00:12:47.210 "compare_and_write": false, 00:12:47.210 "abort": true, 00:12:47.210 "seek_hole": false, 00:12:47.210 "seek_data": false, 00:12:47.210 "copy": true, 00:12:47.210 "nvme_iov_md": false 00:12:47.210 }, 00:12:47.210 "memory_domains": [ 00:12:47.210 { 00:12:47.210 "dma_device_id": "system", 00:12:47.210 "dma_device_type": 1 00:12:47.210 }, 00:12:47.210 { 00:12:47.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.210 "dma_device_type": 2 00:12:47.210 } 00:12:47.210 ], 00:12:47.210 "driver_specific": {} 00:12:47.210 } 00:12:47.210 ] 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.210 10:41:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.210 "name": "Existed_Raid", 00:12:47.210 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:47.210 "strip_size_kb": 64, 00:12:47.210 "state": "configuring", 00:12:47.210 "raid_level": "raid0", 00:12:47.210 "superblock": true, 00:12:47.210 "num_base_bdevs": 4, 00:12:47.210 "num_base_bdevs_discovered": 3, 00:12:47.210 "num_base_bdevs_operational": 4, 00:12:47.210 "base_bdevs_list": [ 00:12:47.210 { 00:12:47.210 "name": "BaseBdev1", 00:12:47.210 "uuid": "7617a2ca-2a6f-4e1a-b634-d75d1809d71c", 00:12:47.210 "is_configured": true, 00:12:47.210 "data_offset": 2048, 00:12:47.210 "data_size": 63488 00:12:47.210 }, 00:12:47.210 { 
00:12:47.210 "name": null, 00:12:47.210 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:47.210 "is_configured": false, 00:12:47.210 "data_offset": 0, 00:12:47.210 "data_size": 63488 00:12:47.210 }, 00:12:47.210 { 00:12:47.210 "name": "BaseBdev3", 00:12:47.210 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:47.210 "is_configured": true, 00:12:47.210 "data_offset": 2048, 00:12:47.210 "data_size": 63488 00:12:47.210 }, 00:12:47.210 { 00:12:47.210 "name": "BaseBdev4", 00:12:47.210 "uuid": "cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:47.210 "is_configured": true, 00:12:47.210 "data_offset": 2048, 00:12:47.210 "data_size": 63488 00:12:47.210 } 00:12:47.210 ] 00:12:47.210 }' 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.210 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.777 [2024-11-15 10:41:18.270071] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.777 10:41:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.777 "name": "Existed_Raid", 00:12:47.777 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:47.777 "strip_size_kb": 64, 00:12:47.777 "state": "configuring", 00:12:47.777 "raid_level": "raid0", 00:12:47.777 "superblock": true, 00:12:47.777 "num_base_bdevs": 4, 00:12:47.777 "num_base_bdevs_discovered": 2, 00:12:47.777 "num_base_bdevs_operational": 4, 00:12:47.777 "base_bdevs_list": [ 00:12:47.777 { 00:12:47.777 "name": "BaseBdev1", 00:12:47.777 "uuid": "7617a2ca-2a6f-4e1a-b634-d75d1809d71c", 00:12:47.777 "is_configured": true, 00:12:47.777 "data_offset": 2048, 00:12:47.777 "data_size": 63488 00:12:47.777 }, 00:12:47.777 { 00:12:47.777 "name": null, 00:12:47.777 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:47.777 "is_configured": false, 00:12:47.777 "data_offset": 0, 00:12:47.777 "data_size": 63488 00:12:47.777 }, 00:12:47.777 { 00:12:47.777 "name": null, 00:12:47.777 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:47.777 "is_configured": false, 00:12:47.777 "data_offset": 0, 00:12:47.777 "data_size": 63488 00:12:47.777 }, 00:12:47.777 { 00:12:47.777 "name": "BaseBdev4", 00:12:47.777 "uuid": "cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:47.777 "is_configured": true, 00:12:47.777 "data_offset": 2048, 00:12:47.777 "data_size": 63488 00:12:47.777 } 00:12:47.777 ] 00:12:47.777 }' 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.777 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.344 10:41:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.344 [2024-11-15 10:41:18.806181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.344 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.345 "name": "Existed_Raid", 00:12:48.345 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:48.345 "strip_size_kb": 64, 00:12:48.345 "state": "configuring", 00:12:48.345 "raid_level": "raid0", 00:12:48.345 "superblock": true, 00:12:48.345 "num_base_bdevs": 4, 00:12:48.345 "num_base_bdevs_discovered": 3, 00:12:48.345 "num_base_bdevs_operational": 4, 00:12:48.345 "base_bdevs_list": [ 00:12:48.345 { 00:12:48.345 "name": "BaseBdev1", 00:12:48.345 "uuid": "7617a2ca-2a6f-4e1a-b634-d75d1809d71c", 00:12:48.345 "is_configured": true, 00:12:48.345 "data_offset": 2048, 00:12:48.345 "data_size": 63488 00:12:48.345 }, 00:12:48.345 { 00:12:48.345 "name": null, 00:12:48.345 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:48.345 "is_configured": false, 00:12:48.345 "data_offset": 0, 00:12:48.345 "data_size": 63488 00:12:48.345 }, 00:12:48.345 { 00:12:48.345 "name": "BaseBdev3", 00:12:48.345 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:48.345 "is_configured": true, 00:12:48.345 "data_offset": 2048, 00:12:48.345 "data_size": 63488 00:12:48.345 }, 00:12:48.345 { 00:12:48.345 "name": "BaseBdev4", 00:12:48.345 "uuid": 
"cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:48.345 "is_configured": true, 00:12:48.345 "data_offset": 2048, 00:12:48.345 "data_size": 63488 00:12:48.345 } 00:12:48.345 ] 00:12:48.345 }' 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.345 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.911 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.912 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.912 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.912 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:48.912 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.912 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:48.912 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:48.912 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.912 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.912 [2024-11-15 10:41:19.410390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.170 "name": "Existed_Raid", 00:12:49.170 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:49.170 "strip_size_kb": 64, 00:12:49.170 "state": "configuring", 00:12:49.170 "raid_level": "raid0", 00:12:49.170 "superblock": true, 00:12:49.170 "num_base_bdevs": 4, 00:12:49.170 "num_base_bdevs_discovered": 2, 00:12:49.170 "num_base_bdevs_operational": 4, 00:12:49.170 "base_bdevs_list": [ 00:12:49.170 { 00:12:49.170 "name": null, 00:12:49.170 
"uuid": "7617a2ca-2a6f-4e1a-b634-d75d1809d71c", 00:12:49.170 "is_configured": false, 00:12:49.170 "data_offset": 0, 00:12:49.170 "data_size": 63488 00:12:49.170 }, 00:12:49.170 { 00:12:49.170 "name": null, 00:12:49.170 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:49.170 "is_configured": false, 00:12:49.170 "data_offset": 0, 00:12:49.170 "data_size": 63488 00:12:49.170 }, 00:12:49.170 { 00:12:49.170 "name": "BaseBdev3", 00:12:49.170 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:49.170 "is_configured": true, 00:12:49.170 "data_offset": 2048, 00:12:49.170 "data_size": 63488 00:12:49.170 }, 00:12:49.170 { 00:12:49.170 "name": "BaseBdev4", 00:12:49.170 "uuid": "cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:49.170 "is_configured": true, 00:12:49.170 "data_offset": 2048, 00:12:49.170 "data_size": 63488 00:12:49.170 } 00:12:49.170 ] 00:12:49.170 }' 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.170 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.737 [2024-11-15 10:41:20.082980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.737 10:41:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.737 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.737 "name": "Existed_Raid", 00:12:49.737 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:49.737 "strip_size_kb": 64, 00:12:49.737 "state": "configuring", 00:12:49.737 "raid_level": "raid0", 00:12:49.737 "superblock": true, 00:12:49.737 "num_base_bdevs": 4, 00:12:49.737 "num_base_bdevs_discovered": 3, 00:12:49.737 "num_base_bdevs_operational": 4, 00:12:49.737 "base_bdevs_list": [ 00:12:49.737 { 00:12:49.737 "name": null, 00:12:49.738 "uuid": "7617a2ca-2a6f-4e1a-b634-d75d1809d71c", 00:12:49.738 "is_configured": false, 00:12:49.738 "data_offset": 0, 00:12:49.738 "data_size": 63488 00:12:49.738 }, 00:12:49.738 { 00:12:49.738 "name": "BaseBdev2", 00:12:49.738 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:49.738 "is_configured": true, 00:12:49.738 "data_offset": 2048, 00:12:49.738 "data_size": 63488 00:12:49.738 }, 00:12:49.738 { 00:12:49.738 "name": "BaseBdev3", 00:12:49.738 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:49.738 "is_configured": true, 00:12:49.738 "data_offset": 2048, 00:12:49.738 "data_size": 63488 00:12:49.738 }, 00:12:49.738 { 00:12:49.738 "name": "BaseBdev4", 00:12:49.738 "uuid": "cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:49.738 "is_configured": true, 00:12:49.738 "data_offset": 2048, 00:12:49.738 "data_size": 63488 00:12:49.738 } 00:12:49.738 ] 00:12:49.738 }' 00:12:49.738 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.738 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.304 10:41:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7617a2ca-2a6f-4e1a-b634-d75d1809d71c 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.304 [2024-11-15 10:41:20.732887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:50.304 [2024-11-15 10:41:20.733426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:50.304 [2024-11-15 10:41:20.733452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:50.304 NewBaseBdev 00:12:50.304 [2024-11-15 10:41:20.733767] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:50.304 [2024-11-15 10:41:20.733939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:50.304 [2024-11-15 10:41:20.733959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:50.304 [2024-11-15 10:41:20.734120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:50.304 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.305 
10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.305 [ 00:12:50.305 { 00:12:50.305 "name": "NewBaseBdev", 00:12:50.305 "aliases": [ 00:12:50.305 "7617a2ca-2a6f-4e1a-b634-d75d1809d71c" 00:12:50.305 ], 00:12:50.305 "product_name": "Malloc disk", 00:12:50.305 "block_size": 512, 00:12:50.305 "num_blocks": 65536, 00:12:50.305 "uuid": "7617a2ca-2a6f-4e1a-b634-d75d1809d71c", 00:12:50.305 "assigned_rate_limits": { 00:12:50.305 "rw_ios_per_sec": 0, 00:12:50.305 "rw_mbytes_per_sec": 0, 00:12:50.305 "r_mbytes_per_sec": 0, 00:12:50.305 "w_mbytes_per_sec": 0 00:12:50.305 }, 00:12:50.305 "claimed": true, 00:12:50.305 "claim_type": "exclusive_write", 00:12:50.305 "zoned": false, 00:12:50.305 "supported_io_types": { 00:12:50.305 "read": true, 00:12:50.305 "write": true, 00:12:50.305 "unmap": true, 00:12:50.305 "flush": true, 00:12:50.305 "reset": true, 00:12:50.305 "nvme_admin": false, 00:12:50.305 "nvme_io": false, 00:12:50.305 "nvme_io_md": false, 00:12:50.305 "write_zeroes": true, 00:12:50.305 "zcopy": true, 00:12:50.305 "get_zone_info": false, 00:12:50.305 "zone_management": false, 00:12:50.305 "zone_append": false, 00:12:50.305 "compare": false, 00:12:50.305 "compare_and_write": false, 00:12:50.305 "abort": true, 00:12:50.305 "seek_hole": false, 00:12:50.305 "seek_data": false, 00:12:50.305 "copy": true, 00:12:50.305 "nvme_iov_md": false 00:12:50.305 }, 00:12:50.305 "memory_domains": [ 00:12:50.305 { 00:12:50.305 "dma_device_id": "system", 00:12:50.305 "dma_device_type": 1 00:12:50.305 }, 00:12:50.305 { 00:12:50.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.305 "dma_device_type": 2 00:12:50.305 } 00:12:50.305 ], 00:12:50.305 "driver_specific": {} 00:12:50.305 } 00:12:50.305 ] 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:50.305 10:41:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.305 "name": "Existed_Raid", 00:12:50.305 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:50.305 "strip_size_kb": 64, 00:12:50.305 
"state": "online", 00:12:50.305 "raid_level": "raid0", 00:12:50.305 "superblock": true, 00:12:50.305 "num_base_bdevs": 4, 00:12:50.305 "num_base_bdevs_discovered": 4, 00:12:50.305 "num_base_bdevs_operational": 4, 00:12:50.305 "base_bdevs_list": [ 00:12:50.305 { 00:12:50.305 "name": "NewBaseBdev", 00:12:50.305 "uuid": "7617a2ca-2a6f-4e1a-b634-d75d1809d71c", 00:12:50.305 "is_configured": true, 00:12:50.305 "data_offset": 2048, 00:12:50.305 "data_size": 63488 00:12:50.305 }, 00:12:50.305 { 00:12:50.305 "name": "BaseBdev2", 00:12:50.305 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:50.305 "is_configured": true, 00:12:50.305 "data_offset": 2048, 00:12:50.305 "data_size": 63488 00:12:50.305 }, 00:12:50.305 { 00:12:50.305 "name": "BaseBdev3", 00:12:50.305 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:50.305 "is_configured": true, 00:12:50.305 "data_offset": 2048, 00:12:50.305 "data_size": 63488 00:12:50.305 }, 00:12:50.305 { 00:12:50.305 "name": "BaseBdev4", 00:12:50.305 "uuid": "cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:50.305 "is_configured": true, 00:12:50.305 "data_offset": 2048, 00:12:50.305 "data_size": 63488 00:12:50.305 } 00:12:50.305 ] 00:12:50.305 }' 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.305 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.872 
10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:50.872 [2024-11-15 10:41:21.289555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.872 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:50.872 "name": "Existed_Raid", 00:12:50.872 "aliases": [ 00:12:50.872 "d9481076-2eca-4caa-b609-fa8bcfbd6ec8" 00:12:50.872 ], 00:12:50.872 "product_name": "Raid Volume", 00:12:50.872 "block_size": 512, 00:12:50.872 "num_blocks": 253952, 00:12:50.872 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:50.872 "assigned_rate_limits": { 00:12:50.872 "rw_ios_per_sec": 0, 00:12:50.872 "rw_mbytes_per_sec": 0, 00:12:50.872 "r_mbytes_per_sec": 0, 00:12:50.872 "w_mbytes_per_sec": 0 00:12:50.872 }, 00:12:50.872 "claimed": false, 00:12:50.872 "zoned": false, 00:12:50.872 "supported_io_types": { 00:12:50.872 "read": true, 00:12:50.872 "write": true, 00:12:50.872 "unmap": true, 00:12:50.872 "flush": true, 00:12:50.872 "reset": true, 00:12:50.872 "nvme_admin": false, 00:12:50.872 "nvme_io": false, 00:12:50.872 "nvme_io_md": false, 00:12:50.872 "write_zeroes": true, 00:12:50.872 "zcopy": false, 00:12:50.872 "get_zone_info": false, 00:12:50.872 "zone_management": false, 00:12:50.872 "zone_append": false, 00:12:50.872 "compare": false, 00:12:50.872 "compare_and_write": false, 00:12:50.872 "abort": 
false, 00:12:50.872 "seek_hole": false, 00:12:50.872 "seek_data": false, 00:12:50.872 "copy": false, 00:12:50.872 "nvme_iov_md": false 00:12:50.872 }, 00:12:50.872 "memory_domains": [ 00:12:50.872 { 00:12:50.872 "dma_device_id": "system", 00:12:50.872 "dma_device_type": 1 00:12:50.872 }, 00:12:50.872 { 00:12:50.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.872 "dma_device_type": 2 00:12:50.872 }, 00:12:50.872 { 00:12:50.872 "dma_device_id": "system", 00:12:50.872 "dma_device_type": 1 00:12:50.872 }, 00:12:50.872 { 00:12:50.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.872 "dma_device_type": 2 00:12:50.872 }, 00:12:50.872 { 00:12:50.872 "dma_device_id": "system", 00:12:50.872 "dma_device_type": 1 00:12:50.872 }, 00:12:50.872 { 00:12:50.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.872 "dma_device_type": 2 00:12:50.872 }, 00:12:50.872 { 00:12:50.872 "dma_device_id": "system", 00:12:50.872 "dma_device_type": 1 00:12:50.872 }, 00:12:50.872 { 00:12:50.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.872 "dma_device_type": 2 00:12:50.872 } 00:12:50.872 ], 00:12:50.872 "driver_specific": { 00:12:50.872 "raid": { 00:12:50.872 "uuid": "d9481076-2eca-4caa-b609-fa8bcfbd6ec8", 00:12:50.872 "strip_size_kb": 64, 00:12:50.872 "state": "online", 00:12:50.872 "raid_level": "raid0", 00:12:50.872 "superblock": true, 00:12:50.872 "num_base_bdevs": 4, 00:12:50.872 "num_base_bdevs_discovered": 4, 00:12:50.872 "num_base_bdevs_operational": 4, 00:12:50.872 "base_bdevs_list": [ 00:12:50.872 { 00:12:50.872 "name": "NewBaseBdev", 00:12:50.872 "uuid": "7617a2ca-2a6f-4e1a-b634-d75d1809d71c", 00:12:50.872 "is_configured": true, 00:12:50.872 "data_offset": 2048, 00:12:50.872 "data_size": 63488 00:12:50.872 }, 00:12:50.872 { 00:12:50.872 "name": "BaseBdev2", 00:12:50.872 "uuid": "2c3000a7-6d20-4b1e-b653-70bd4c4d4c4a", 00:12:50.872 "is_configured": true, 00:12:50.873 "data_offset": 2048, 00:12:50.873 "data_size": 63488 00:12:50.873 }, 00:12:50.873 { 00:12:50.873 
"name": "BaseBdev3", 00:12:50.873 "uuid": "67e67d0e-75d9-4a98-a262-276dfd20e258", 00:12:50.873 "is_configured": true, 00:12:50.873 "data_offset": 2048, 00:12:50.873 "data_size": 63488 00:12:50.873 }, 00:12:50.873 { 00:12:50.873 "name": "BaseBdev4", 00:12:50.873 "uuid": "cacad1a6-4890-4ecc-afdb-75b7ae58d43d", 00:12:50.873 "is_configured": true, 00:12:50.873 "data_offset": 2048, 00:12:50.873 "data_size": 63488 00:12:50.873 } 00:12:50.873 ] 00:12:50.873 } 00:12:50.873 } 00:12:50.873 }' 00:12:50.873 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:50.873 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:50.873 BaseBdev2 00:12:50.873 BaseBdev3 00:12:50.873 BaseBdev4' 00:12:50.873 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.873 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:50.873 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.873 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.873 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:50.873 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.873 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.131 10:41:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.131 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.131 [2024-11-15 10:41:21.645172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.131 [2024-11-15 10:41:21.645210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.131 [2024-11-15 10:41:21.645301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.131 [2024-11-15 10:41:21.645416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.132 [2024-11-15 10:41:21.645436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70330 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70330 ']' 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70330 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70330 00:12:51.132 killing process with pid 70330 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70330' 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70330 00:12:51.132 [2024-11-15 10:41:21.684840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.132 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70330 00:12:51.698 [2024-11-15 10:41:22.019553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.634 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:52.634 00:12:52.634 real 0m12.660s 00:12:52.634 user 0m21.243s 00:12:52.634 sys 0m1.638s 00:12:52.634 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.634 
************************************ 00:12:52.634 END TEST raid_state_function_test_sb 00:12:52.634 ************************************ 00:12:52.634 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.634 10:41:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:52.634 10:41:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:52.634 10:41:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.634 10:41:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.634 ************************************ 00:12:52.634 START TEST raid_superblock_test 00:12:52.634 ************************************ 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71015 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71015 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 71015 ']' 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.634 10:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.892 [2024-11-15 10:41:23.221801] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:12:52.892 [2024-11-15 10:41:23.221978] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71015 ] 00:12:52.892 [2024-11-15 10:41:23.408520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.150 [2024-11-15 10:41:23.536052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.408 [2024-11-15 10:41:23.755859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.408 [2024-11-15 10:41:23.755939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.666 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:53.666 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:53.666 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:53.666 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:53.666 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:53.666 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:53.667 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:53.667 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:53.667 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:53.667 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:53.667 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:53.667 
10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.667 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 malloc1 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 [2024-11-15 10:41:24.259426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:53.938 [2024-11-15 10:41:24.259492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.938 [2024-11-15 10:41:24.259522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:53.938 [2024-11-15 10:41:24.259538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.938 [2024-11-15 10:41:24.262476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.938 [2024-11-15 10:41:24.262519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:53.938 pt1 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 malloc2 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 [2024-11-15 10:41:24.310656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:53.938 [2024-11-15 10:41:24.310719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.938 [2024-11-15 10:41:24.310753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:53.938 [2024-11-15 10:41:24.310767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.938 [2024-11-15 10:41:24.313320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.938 [2024-11-15 10:41:24.313381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:53.938 
pt2 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 malloc3 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.938 [2024-11-15 10:41:24.371932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:53.938 [2024-11-15 10:41:24.371992] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.938 [2024-11-15 10:41:24.372023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:53.938 [2024-11-15 10:41:24.372037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.938 [2024-11-15 10:41:24.374596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.938 [2024-11-15 10:41:24.374638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:53.938 pt3 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:53.938 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.939 malloc4 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.939 [2024-11-15 10:41:24.422738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:53.939 [2024-11-15 10:41:24.422806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.939 [2024-11-15 10:41:24.422837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:53.939 [2024-11-15 10:41:24.422852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.939 [2024-11-15 10:41:24.425453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.939 [2024-11-15 10:41:24.425493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:53.939 pt4 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.939 [2024-11-15 10:41:24.430774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:53.939 [2024-11-15 
10:41:24.433043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:53.939 [2024-11-15 10:41:24.433165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:53.939 [2024-11-15 10:41:24.433245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:53.939 [2024-11-15 10:41:24.433515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:53.939 [2024-11-15 10:41:24.433535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:53.939 [2024-11-15 10:41:24.433851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:53.939 [2024-11-15 10:41:24.434067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:53.939 [2024-11-15 10:41:24.434090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:53.939 [2024-11-15 10:41:24.434273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.939 "name": "raid_bdev1", 00:12:53.939 "uuid": "ddc179a6-749c-48d7-8c91-e96b1f84f5f9", 00:12:53.939 "strip_size_kb": 64, 00:12:53.939 "state": "online", 00:12:53.939 "raid_level": "raid0", 00:12:53.939 "superblock": true, 00:12:53.939 "num_base_bdevs": 4, 00:12:53.939 "num_base_bdevs_discovered": 4, 00:12:53.939 "num_base_bdevs_operational": 4, 00:12:53.939 "base_bdevs_list": [ 00:12:53.939 { 00:12:53.939 "name": "pt1", 00:12:53.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:53.939 "is_configured": true, 00:12:53.939 "data_offset": 2048, 00:12:53.939 "data_size": 63488 00:12:53.939 }, 00:12:53.939 { 00:12:53.939 "name": "pt2", 00:12:53.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:53.939 "is_configured": true, 00:12:53.939 "data_offset": 2048, 00:12:53.939 "data_size": 63488 00:12:53.939 }, 00:12:53.939 { 00:12:53.939 "name": "pt3", 00:12:53.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:53.939 "is_configured": true, 00:12:53.939 "data_offset": 2048, 00:12:53.939 
"data_size": 63488 00:12:53.939 }, 00:12:53.939 { 00:12:53.939 "name": "pt4", 00:12:53.939 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:53.939 "is_configured": true, 00:12:53.939 "data_offset": 2048, 00:12:53.939 "data_size": 63488 00:12:53.939 } 00:12:53.939 ] 00:12:53.939 }' 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.939 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.506 [2024-11-15 10:41:24.951308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.506 "name": "raid_bdev1", 00:12:54.506 "aliases": [ 00:12:54.506 "ddc179a6-749c-48d7-8c91-e96b1f84f5f9" 
00:12:54.506 ], 00:12:54.506 "product_name": "Raid Volume", 00:12:54.506 "block_size": 512, 00:12:54.506 "num_blocks": 253952, 00:12:54.506 "uuid": "ddc179a6-749c-48d7-8c91-e96b1f84f5f9", 00:12:54.506 "assigned_rate_limits": { 00:12:54.506 "rw_ios_per_sec": 0, 00:12:54.506 "rw_mbytes_per_sec": 0, 00:12:54.506 "r_mbytes_per_sec": 0, 00:12:54.506 "w_mbytes_per_sec": 0 00:12:54.506 }, 00:12:54.506 "claimed": false, 00:12:54.506 "zoned": false, 00:12:54.506 "supported_io_types": { 00:12:54.506 "read": true, 00:12:54.506 "write": true, 00:12:54.506 "unmap": true, 00:12:54.506 "flush": true, 00:12:54.506 "reset": true, 00:12:54.506 "nvme_admin": false, 00:12:54.506 "nvme_io": false, 00:12:54.506 "nvme_io_md": false, 00:12:54.506 "write_zeroes": true, 00:12:54.506 "zcopy": false, 00:12:54.506 "get_zone_info": false, 00:12:54.506 "zone_management": false, 00:12:54.506 "zone_append": false, 00:12:54.506 "compare": false, 00:12:54.506 "compare_and_write": false, 00:12:54.506 "abort": false, 00:12:54.506 "seek_hole": false, 00:12:54.506 "seek_data": false, 00:12:54.506 "copy": false, 00:12:54.506 "nvme_iov_md": false 00:12:54.506 }, 00:12:54.506 "memory_domains": [ 00:12:54.506 { 00:12:54.506 "dma_device_id": "system", 00:12:54.506 "dma_device_type": 1 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.506 "dma_device_type": 2 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "dma_device_id": "system", 00:12:54.506 "dma_device_type": 1 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.506 "dma_device_type": 2 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "dma_device_id": "system", 00:12:54.506 "dma_device_type": 1 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.506 "dma_device_type": 2 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "dma_device_id": "system", 00:12:54.506 "dma_device_type": 1 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:54.506 "dma_device_type": 2 00:12:54.506 } 00:12:54.506 ], 00:12:54.506 "driver_specific": { 00:12:54.506 "raid": { 00:12:54.506 "uuid": "ddc179a6-749c-48d7-8c91-e96b1f84f5f9", 00:12:54.506 "strip_size_kb": 64, 00:12:54.506 "state": "online", 00:12:54.506 "raid_level": "raid0", 00:12:54.506 "superblock": true, 00:12:54.506 "num_base_bdevs": 4, 00:12:54.506 "num_base_bdevs_discovered": 4, 00:12:54.506 "num_base_bdevs_operational": 4, 00:12:54.506 "base_bdevs_list": [ 00:12:54.506 { 00:12:54.506 "name": "pt1", 00:12:54.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.506 "is_configured": true, 00:12:54.506 "data_offset": 2048, 00:12:54.506 "data_size": 63488 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "name": "pt2", 00:12:54.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.506 "is_configured": true, 00:12:54.506 "data_offset": 2048, 00:12:54.506 "data_size": 63488 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "name": "pt3", 00:12:54.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:54.506 "is_configured": true, 00:12:54.506 "data_offset": 2048, 00:12:54.506 "data_size": 63488 00:12:54.506 }, 00:12:54.506 { 00:12:54.506 "name": "pt4", 00:12:54.506 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:54.506 "is_configured": true, 00:12:54.506 "data_offset": 2048, 00:12:54.506 "data_size": 63488 00:12:54.506 } 00:12:54.506 ] 00:12:54.506 } 00:12:54.506 } 00:12:54.506 }' 00:12:54.506 10:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.506 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:54.506 pt2 00:12:54.506 pt3 00:12:54.506 pt4' 00:12:54.506 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.764 10:41:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:54.764 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.023 [2024-11-15 10:41:25.327402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ddc179a6-749c-48d7-8c91-e96b1f84f5f9 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ddc179a6-749c-48d7-8c91-e96b1f84f5f9 ']' 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.023 [2024-11-15 10:41:25.374987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.023 [2024-11-15 10:41:25.375033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.023 [2024-11-15 10:41:25.375136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.023 [2024-11-15 10:41:25.375234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.023 [2024-11-15 10:41:25.375257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.023 10:41:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.023 [2024-11-15 10:41:25.527070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:55.023 [2024-11-15 10:41:25.529393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:55.023 [2024-11-15 10:41:25.529459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:55.023 [2024-11-15 10:41:25.529513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:55.023 [2024-11-15 10:41:25.529594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:55.023 [2024-11-15 10:41:25.529661] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:55.023 [2024-11-15 10:41:25.529695] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:55.023 [2024-11-15 10:41:25.529727] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:55.023 [2024-11-15 10:41:25.529749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.023 [2024-11-15 10:41:25.529769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:55.023 request: 00:12:55.023 { 00:12:55.023 "name": "raid_bdev1", 00:12:55.023 "raid_level": "raid0", 00:12:55.023 "base_bdevs": [ 00:12:55.023 "malloc1", 00:12:55.023 "malloc2", 00:12:55.023 "malloc3", 00:12:55.023 "malloc4" 00:12:55.023 ], 00:12:55.023 "strip_size_kb": 64, 00:12:55.023 "superblock": false, 00:12:55.023 "method": "bdev_raid_create", 00:12:55.023 "req_id": 1 00:12:55.023 } 00:12:55.023 Got JSON-RPC error response 00:12:55.023 response: 00:12:55.023 { 00:12:55.023 "code": -17, 00:12:55.023 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:55.023 } 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.023 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.282 [2024-11-15 10:41:25.603075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:55.282 [2024-11-15 10:41:25.603151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.282 [2024-11-15 10:41:25.603192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:55.282 [2024-11-15 10:41:25.603210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.282 [2024-11-15 10:41:25.605866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.282 [2024-11-15 10:41:25.605914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:55.282 [2024-11-15 10:41:25.606026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:55.282 [2024-11-15 10:41:25.606099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:55.282 pt1 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.282 "name": "raid_bdev1", 00:12:55.282 "uuid": "ddc179a6-749c-48d7-8c91-e96b1f84f5f9", 00:12:55.282 "strip_size_kb": 64, 00:12:55.282 "state": "configuring", 00:12:55.282 "raid_level": "raid0", 00:12:55.282 "superblock": true, 00:12:55.282 "num_base_bdevs": 4, 00:12:55.282 "num_base_bdevs_discovered": 1, 00:12:55.282 "num_base_bdevs_operational": 4, 00:12:55.282 "base_bdevs_list": [ 00:12:55.282 { 00:12:55.282 "name": "pt1", 00:12:55.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.282 "is_configured": true, 00:12:55.282 "data_offset": 2048, 00:12:55.282 "data_size": 63488 00:12:55.282 }, 00:12:55.282 { 00:12:55.282 "name": null, 00:12:55.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.282 "is_configured": false, 00:12:55.282 "data_offset": 2048, 00:12:55.282 "data_size": 63488 00:12:55.282 }, 00:12:55.282 { 00:12:55.282 "name": null, 00:12:55.282 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.282 "is_configured": false, 00:12:55.282 "data_offset": 2048, 00:12:55.282 "data_size": 63488 00:12:55.282 }, 00:12:55.282 { 00:12:55.282 "name": null, 00:12:55.282 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.282 "is_configured": false, 00:12:55.282 "data_offset": 2048, 00:12:55.282 "data_size": 63488 00:12:55.282 } 00:12:55.282 ] 00:12:55.282 }' 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.282 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.540 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:55.540 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.798 [2024-11-15 10:41:26.103206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:55.798 [2024-11-15 10:41:26.103292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.798 [2024-11-15 10:41:26.103320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:55.798 [2024-11-15 10:41:26.103337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.798 [2024-11-15 10:41:26.103883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.798 [2024-11-15 10:41:26.103920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:55.798 [2024-11-15 10:41:26.104018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:55.798 [2024-11-15 10:41:26.104056] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:55.798 pt2 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.798 [2024-11-15 10:41:26.111187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.798 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.799 10:41:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.799 "name": "raid_bdev1", 00:12:55.799 "uuid": "ddc179a6-749c-48d7-8c91-e96b1f84f5f9", 00:12:55.799 "strip_size_kb": 64, 00:12:55.799 "state": "configuring", 00:12:55.799 "raid_level": "raid0", 00:12:55.799 "superblock": true, 00:12:55.799 "num_base_bdevs": 4, 00:12:55.799 "num_base_bdevs_discovered": 1, 00:12:55.799 "num_base_bdevs_operational": 4, 00:12:55.799 "base_bdevs_list": [ 00:12:55.799 { 00:12:55.799 "name": "pt1", 00:12:55.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.799 "is_configured": true, 00:12:55.799 "data_offset": 2048, 00:12:55.799 "data_size": 63488 00:12:55.799 }, 00:12:55.799 { 00:12:55.799 "name": null, 00:12:55.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.799 "is_configured": false, 00:12:55.799 "data_offset": 0, 00:12:55.799 "data_size": 63488 00:12:55.799 }, 00:12:55.799 { 00:12:55.799 "name": null, 00:12:55.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.799 "is_configured": false, 00:12:55.799 "data_offset": 2048, 00:12:55.799 "data_size": 63488 00:12:55.799 }, 00:12:55.799 { 00:12:55.799 "name": null, 00:12:55.799 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.799 "is_configured": false, 00:12:55.799 "data_offset": 2048, 00:12:55.799 "data_size": 63488 00:12:55.799 } 00:12:55.799 ] 00:12:55.799 }' 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.799 10:41:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.058 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:56.058 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:56.058 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.058 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.316 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.316 [2024-11-15 10:41:26.619345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.316 [2024-11-15 10:41:26.619432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.316 [2024-11-15 10:41:26.619462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:56.316 [2024-11-15 10:41:26.619476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.317 [2024-11-15 10:41:26.620006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.317 [2024-11-15 10:41:26.620033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.317 [2024-11-15 10:41:26.620133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:56.317 [2024-11-15 10:41:26.620165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.317 pt2 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.317 [2024-11-15 10:41:26.627307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:56.317 [2024-11-15 10:41:26.627390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.317 [2024-11-15 10:41:26.627419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:56.317 [2024-11-15 10:41:26.627432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.317 [2024-11-15 10:41:26.627858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.317 [2024-11-15 10:41:26.627889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:56.317 [2024-11-15 10:41:26.627968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:56.317 [2024-11-15 10:41:26.628003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:56.317 pt3 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.317 [2024-11-15 10:41:26.635280] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:56.317 [2024-11-15 10:41:26.635330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.317 [2024-11-15 10:41:26.635367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:56.317 [2024-11-15 10:41:26.635383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.317 [2024-11-15 10:41:26.635836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.317 [2024-11-15 10:41:26.635867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:56.317 [2024-11-15 10:41:26.635945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:56.317 [2024-11-15 10:41:26.635973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:56.317 [2024-11-15 10:41:26.636136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:56.317 [2024-11-15 10:41:26.636151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:56.317 [2024-11-15 10:41:26.636464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:56.317 [2024-11-15 10:41:26.636651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:56.317 [2024-11-15 10:41:26.636672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:56.317 [2024-11-15 10:41:26.636832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.317 pt4 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.317 "name": "raid_bdev1", 00:12:56.317 "uuid": "ddc179a6-749c-48d7-8c91-e96b1f84f5f9", 00:12:56.317 "strip_size_kb": 64, 00:12:56.317 "state": "online", 00:12:56.317 "raid_level": "raid0", 00:12:56.317 
"superblock": true, 00:12:56.317 "num_base_bdevs": 4, 00:12:56.317 "num_base_bdevs_discovered": 4, 00:12:56.317 "num_base_bdevs_operational": 4, 00:12:56.317 "base_bdevs_list": [ 00:12:56.317 { 00:12:56.317 "name": "pt1", 00:12:56.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.317 "is_configured": true, 00:12:56.317 "data_offset": 2048, 00:12:56.317 "data_size": 63488 00:12:56.317 }, 00:12:56.317 { 00:12:56.317 "name": "pt2", 00:12:56.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.317 "is_configured": true, 00:12:56.317 "data_offset": 2048, 00:12:56.317 "data_size": 63488 00:12:56.317 }, 00:12:56.317 { 00:12:56.317 "name": "pt3", 00:12:56.317 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.317 "is_configured": true, 00:12:56.317 "data_offset": 2048, 00:12:56.317 "data_size": 63488 00:12:56.317 }, 00:12:56.317 { 00:12:56.317 "name": "pt4", 00:12:56.317 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.317 "is_configured": true, 00:12:56.317 "data_offset": 2048, 00:12:56.317 "data_size": 63488 00:12:56.317 } 00:12:56.317 ] 00:12:56.317 }' 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.317 10:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:56.885 10:41:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.885 [2024-11-15 10:41:27.151869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:56.885 "name": "raid_bdev1", 00:12:56.885 "aliases": [ 00:12:56.885 "ddc179a6-749c-48d7-8c91-e96b1f84f5f9" 00:12:56.885 ], 00:12:56.885 "product_name": "Raid Volume", 00:12:56.885 "block_size": 512, 00:12:56.885 "num_blocks": 253952, 00:12:56.885 "uuid": "ddc179a6-749c-48d7-8c91-e96b1f84f5f9", 00:12:56.885 "assigned_rate_limits": { 00:12:56.885 "rw_ios_per_sec": 0, 00:12:56.885 "rw_mbytes_per_sec": 0, 00:12:56.885 "r_mbytes_per_sec": 0, 00:12:56.885 "w_mbytes_per_sec": 0 00:12:56.885 }, 00:12:56.885 "claimed": false, 00:12:56.885 "zoned": false, 00:12:56.885 "supported_io_types": { 00:12:56.885 "read": true, 00:12:56.885 "write": true, 00:12:56.885 "unmap": true, 00:12:56.885 "flush": true, 00:12:56.885 "reset": true, 00:12:56.885 "nvme_admin": false, 00:12:56.885 "nvme_io": false, 00:12:56.885 "nvme_io_md": false, 00:12:56.885 "write_zeroes": true, 00:12:56.885 "zcopy": false, 00:12:56.885 "get_zone_info": false, 00:12:56.885 "zone_management": false, 00:12:56.885 "zone_append": false, 00:12:56.885 "compare": false, 00:12:56.885 "compare_and_write": false, 00:12:56.885 "abort": false, 00:12:56.885 "seek_hole": false, 00:12:56.885 "seek_data": false, 00:12:56.885 "copy": false, 00:12:56.885 "nvme_iov_md": false 00:12:56.885 }, 00:12:56.885 
"memory_domains": [ 00:12:56.885 { 00:12:56.885 "dma_device_id": "system", 00:12:56.885 "dma_device_type": 1 00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.885 "dma_device_type": 2 00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "dma_device_id": "system", 00:12:56.885 "dma_device_type": 1 00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.885 "dma_device_type": 2 00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "dma_device_id": "system", 00:12:56.885 "dma_device_type": 1 00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.885 "dma_device_type": 2 00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "dma_device_id": "system", 00:12:56.885 "dma_device_type": 1 00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.885 "dma_device_type": 2 00:12:56.885 } 00:12:56.885 ], 00:12:56.885 "driver_specific": { 00:12:56.885 "raid": { 00:12:56.885 "uuid": "ddc179a6-749c-48d7-8c91-e96b1f84f5f9", 00:12:56.885 "strip_size_kb": 64, 00:12:56.885 "state": "online", 00:12:56.885 "raid_level": "raid0", 00:12:56.885 "superblock": true, 00:12:56.885 "num_base_bdevs": 4, 00:12:56.885 "num_base_bdevs_discovered": 4, 00:12:56.885 "num_base_bdevs_operational": 4, 00:12:56.885 "base_bdevs_list": [ 00:12:56.885 { 00:12:56.885 "name": "pt1", 00:12:56.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.885 "is_configured": true, 00:12:56.885 "data_offset": 2048, 00:12:56.885 "data_size": 63488 00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "name": "pt2", 00:12:56.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.885 "is_configured": true, 00:12:56.885 "data_offset": 2048, 00:12:56.885 "data_size": 63488 00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "name": "pt3", 00:12:56.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.885 "is_configured": true, 00:12:56.885 "data_offset": 2048, 00:12:56.885 "data_size": 63488 
00:12:56.885 }, 00:12:56.885 { 00:12:56.885 "name": "pt4", 00:12:56.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.885 "is_configured": true, 00:12:56.885 "data_offset": 2048, 00:12:56.885 "data_size": 63488 00:12:56.885 } 00:12:56.885 ] 00:12:56.885 } 00:12:56.885 } 00:12:56.885 }' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:56.885 pt2 00:12:56.885 pt3 00:12:56.885 pt4' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.885 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.886 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.886 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.886 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:56.886 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.886 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.886 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.886 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.886 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.886 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.144 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.144 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:12:57.144 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.144 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.144 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.145 [2024-11-15 10:41:27.495929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ddc179a6-749c-48d7-8c91-e96b1f84f5f9 '!=' ddc179a6-749c-48d7-8c91-e96b1f84f5f9 ']' 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71015 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 71015 ']' 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 71015 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71015 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71015' 00:12:57.145 killing process with pid 71015 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 71015 00:12:57.145 [2024-11-15 10:41:27.578570] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.145 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 71015 00:12:57.145 [2024-11-15 10:41:27.578681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.145 [2024-11-15 10:41:27.578780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.145 [2024-11-15 10:41:27.578800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:57.403 [2024-11-15 10:41:27.914040] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.779 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:58.779 00:12:58.779 real 0m5.834s 00:12:58.779 user 0m8.893s 00:12:58.779 sys 0m0.814s 00:12:58.779 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:58.779 ************************************ 00:12:58.779 END TEST raid_superblock_test 00:12:58.779 ************************************ 00:12:58.779 10:41:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.779 10:41:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:58.779 10:41:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:58.779 10:41:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:58.779 10:41:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.779 ************************************ 00:12:58.779 START TEST raid_read_error_test 00:12:58.779 ************************************ 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1AOh6igH1t 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71280 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71280 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71280 ']' 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:58.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:58.779 10:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.779 [2024-11-15 10:41:29.072180] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:12:58.779 [2024-11-15 10:41:29.072366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71280 ] 00:12:58.779 [2024-11-15 10:41:29.254377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.038 [2024-11-15 10:41:29.357034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.038 [2024-11-15 10:41:29.536137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.038 [2024-11-15 10:41:29.536207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.604 BaseBdev1_malloc 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.604 true 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.604 [2024-11-15 10:41:30.093449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:59.604 [2024-11-15 10:41:30.093517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.604 [2024-11-15 10:41:30.093547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:59.604 [2024-11-15 10:41:30.093566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.604 [2024-11-15 10:41:30.096153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.604 [2024-11-15 10:41:30.096207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:59.604 BaseBdev1 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.604 BaseBdev2_malloc 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.604 true 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.604 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.604 [2024-11-15 10:41:30.148803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:59.604 [2024-11-15 10:41:30.148871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.605 [2024-11-15 10:41:30.148898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:59.605 [2024-11-15 10:41:30.148916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.605 [2024-11-15 10:41:30.151518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.605 [2024-11-15 10:41:30.151570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:59.605 BaseBdev2 00:12:59.605 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.605 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.605 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:59.605 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.605 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.863 BaseBdev3_malloc 00:12:59.863 10:41:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.863 true 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.863 [2024-11-15 10:41:30.215513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:59.863 [2024-11-15 10:41:30.215582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.863 [2024-11-15 10:41:30.215610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:59.863 [2024-11-15 10:41:30.215628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.863 [2024-11-15 10:41:30.218196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.863 [2024-11-15 10:41:30.218247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:59.863 BaseBdev3 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.863 BaseBdev4_malloc 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.863 true 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.863 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.863 [2024-11-15 10:41:30.271132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:59.863 [2024-11-15 10:41:30.271202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.864 [2024-11-15 10:41:30.271230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:59.864 [2024-11-15 10:41:30.271249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.864 [2024-11-15 10:41:30.273803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.864 [2024-11-15 10:41:30.273858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:59.864 BaseBdev4 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.864 [2024-11-15 10:41:30.279215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.864 [2024-11-15 10:41:30.281474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.864 [2024-11-15 10:41:30.281582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.864 [2024-11-15 10:41:30.281677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:59.864 [2024-11-15 10:41:30.281970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:59.864 [2024-11-15 10:41:30.282009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:59.864 [2024-11-15 10:41:30.282319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:59.864 [2024-11-15 10:41:30.282560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:59.864 [2024-11-15 10:41:30.282588] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:59.864 [2024-11-15 10:41:30.282787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:59.864 10:41:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.864 "name": "raid_bdev1", 00:12:59.864 "uuid": "a761d19f-ef22-40cd-8de9-15b144c9ef43", 00:12:59.864 "strip_size_kb": 64, 00:12:59.864 "state": "online", 00:12:59.864 "raid_level": "raid0", 00:12:59.864 "superblock": true, 00:12:59.864 "num_base_bdevs": 4, 00:12:59.864 "num_base_bdevs_discovered": 4, 00:12:59.864 "num_base_bdevs_operational": 4, 00:12:59.864 "base_bdevs_list": [ 00:12:59.864 
{ 00:12:59.864 "name": "BaseBdev1", 00:12:59.864 "uuid": "cc52847a-7876-502e-a7f3-a32a026b34d0", 00:12:59.864 "is_configured": true, 00:12:59.864 "data_offset": 2048, 00:12:59.864 "data_size": 63488 00:12:59.864 }, 00:12:59.864 { 00:12:59.864 "name": "BaseBdev2", 00:12:59.864 "uuid": "864b4f9c-9c41-5028-9add-1461abf83d55", 00:12:59.864 "is_configured": true, 00:12:59.864 "data_offset": 2048, 00:12:59.864 "data_size": 63488 00:12:59.864 }, 00:12:59.864 { 00:12:59.864 "name": "BaseBdev3", 00:12:59.864 "uuid": "7da852ac-6e2b-54cb-98c5-4ab055babdef", 00:12:59.864 "is_configured": true, 00:12:59.864 "data_offset": 2048, 00:12:59.864 "data_size": 63488 00:12:59.864 }, 00:12:59.864 { 00:12:59.864 "name": "BaseBdev4", 00:12:59.864 "uuid": "99b1bbf8-012c-559b-bb70-4dd94139ead8", 00:12:59.864 "is_configured": true, 00:12:59.864 "data_offset": 2048, 00:12:59.864 "data_size": 63488 00:12:59.864 } 00:12:59.864 ] 00:12:59.864 }' 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.864 10:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.430 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:00.430 10:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:00.430 [2024-11-15 10:41:30.920672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.364 10:41:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.364 10:41:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.364 "name": "raid_bdev1", 00:13:01.364 "uuid": "a761d19f-ef22-40cd-8de9-15b144c9ef43", 00:13:01.364 "strip_size_kb": 64, 00:13:01.364 "state": "online", 00:13:01.364 "raid_level": "raid0", 00:13:01.364 "superblock": true, 00:13:01.364 "num_base_bdevs": 4, 00:13:01.364 "num_base_bdevs_discovered": 4, 00:13:01.364 "num_base_bdevs_operational": 4, 00:13:01.364 "base_bdevs_list": [ 00:13:01.364 { 00:13:01.364 "name": "BaseBdev1", 00:13:01.364 "uuid": "cc52847a-7876-502e-a7f3-a32a026b34d0", 00:13:01.364 "is_configured": true, 00:13:01.364 "data_offset": 2048, 00:13:01.364 "data_size": 63488 00:13:01.364 }, 00:13:01.364 { 00:13:01.364 "name": "BaseBdev2", 00:13:01.364 "uuid": "864b4f9c-9c41-5028-9add-1461abf83d55", 00:13:01.364 "is_configured": true, 00:13:01.364 "data_offset": 2048, 00:13:01.364 "data_size": 63488 00:13:01.364 }, 00:13:01.364 { 00:13:01.364 "name": "BaseBdev3", 00:13:01.364 "uuid": "7da852ac-6e2b-54cb-98c5-4ab055babdef", 00:13:01.364 "is_configured": true, 00:13:01.364 "data_offset": 2048, 00:13:01.364 "data_size": 63488 00:13:01.364 }, 00:13:01.364 { 00:13:01.364 "name": "BaseBdev4", 00:13:01.364 "uuid": "99b1bbf8-012c-559b-bb70-4dd94139ead8", 00:13:01.364 "is_configured": true, 00:13:01.364 "data_offset": 2048, 00:13:01.364 "data_size": 63488 00:13:01.364 } 00:13:01.364 ] 00:13:01.364 }' 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.364 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.931 [2024-11-15 10:41:32.327325] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.931 [2024-11-15 10:41:32.327377] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.931 [2024-11-15 10:41:32.330850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.931 [2024-11-15 10:41:32.330928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.931 [2024-11-15 10:41:32.330998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.931 [2024-11-15 10:41:32.331018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:01.931 { 00:13:01.931 "results": [ 00:13:01.931 { 00:13:01.931 "job": "raid_bdev1", 00:13:01.931 "core_mask": "0x1", 00:13:01.931 "workload": "randrw", 00:13:01.931 "percentage": 50, 00:13:01.931 "status": "finished", 00:13:01.931 "queue_depth": 1, 00:13:01.931 "io_size": 131072, 00:13:01.931 "runtime": 1.404538, 00:13:01.931 "iops": 11197.988235277366, 00:13:01.931 "mibps": 1399.7485294096707, 00:13:01.931 "io_failed": 1, 00:13:01.931 "io_timeout": 0, 00:13:01.931 "avg_latency_us": 122.7427073327207, 00:13:01.931 "min_latency_us": 42.589090909090906, 00:13:01.931 "max_latency_us": 1899.0545454545454 00:13:01.931 } 00:13:01.931 ], 00:13:01.931 "core_count": 1 00:13:01.931 } 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71280 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71280 ']' 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71280 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71280 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:01.931 killing process with pid 71280 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71280' 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71280 00:13:01.931 [2024-11-15 10:41:32.364310] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.931 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71280 00:13:02.192 [2024-11-15 10:41:32.641064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1AOh6igH1t 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:03.130 00:13:03.130 real 0m4.718s 00:13:03.130 user 0m5.890s 00:13:03.130 sys 0m0.510s 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:13:03.130 ************************************ 00:13:03.130 END TEST raid_read_error_test 00:13:03.130 ************************************ 00:13:03.130 10:41:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.389 10:41:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:03.389 10:41:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:03.389 10:41:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.389 10:41:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.389 ************************************ 00:13:03.389 START TEST raid_write_error_test 00:13:03.389 ************************************ 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bHBO7Qxtim 00:13:03.389 10:41:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71426 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71426 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71426 ']' 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:03.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:03.389 10:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.389 [2024-11-15 10:41:33.846688] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:13:03.389 [2024-11-15 10:41:33.846871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71426 ] 00:13:03.648 [2024-11-15 10:41:34.027016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.648 [2024-11-15 10:41:34.138419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.907 [2024-11-15 10:41:34.326921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.907 [2024-11-15 10:41:34.327039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.476 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.476 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:04.476 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.476 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.476 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.476 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.476 BaseBdev1_malloc 00:13:04.476 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.476 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:04.476 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.477 true 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.477 [2024-11-15 10:41:34.929340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:04.477 [2024-11-15 10:41:34.929443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.477 [2024-11-15 10:41:34.929490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:04.477 [2024-11-15 10:41:34.929518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.477 [2024-11-15 10:41:34.932644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.477 [2024-11-15 10:41:34.932704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.477 BaseBdev1 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.477 BaseBdev2_malloc 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:04.477 10:41:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.477 true 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.477 [2024-11-15 10:41:34.989218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:04.477 [2024-11-15 10:41:34.989296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.477 [2024-11-15 10:41:34.989335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:04.477 [2024-11-15 10:41:34.989383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.477 [2024-11-15 10:41:34.992199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.477 [2024-11-15 10:41:34.992262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.477 BaseBdev2 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.477 10:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:04.736 BaseBdev3_malloc 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.736 true 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.736 [2024-11-15 10:41:35.059390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:04.736 [2024-11-15 10:41:35.059465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.736 [2024-11-15 10:41:35.059506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:04.736 [2024-11-15 10:41:35.059539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.736 [2024-11-15 10:41:35.062406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.736 [2024-11-15 10:41:35.062465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:04.736 BaseBdev3 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.736 BaseBdev4_malloc 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.736 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.736 true 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.737 [2024-11-15 10:41:35.119155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:04.737 [2024-11-15 10:41:35.119234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.737 [2024-11-15 10:41:35.119279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:04.737 [2024-11-15 10:41:35.119308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.737 [2024-11-15 10:41:35.122442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.737 [2024-11-15 10:41:35.122503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:04.737 BaseBdev4 
00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.737 [2024-11-15 10:41:35.131394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.737 [2024-11-15 10:41:35.133782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.737 [2024-11-15 10:41:35.133898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.737 [2024-11-15 10:41:35.134000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:04.737 [2024-11-15 10:41:35.134445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:04.737 [2024-11-15 10:41:35.134487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:04.737 [2024-11-15 10:41:35.134889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:04.737 [2024-11-15 10:41:35.135165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:04.737 [2024-11-15 10:41:35.135211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:04.737 [2024-11-15 10:41:35.135578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.737 "name": "raid_bdev1", 00:13:04.737 "uuid": "3245cdee-4a49-460e-ae68-e36206ce8a3e", 00:13:04.737 "strip_size_kb": 64, 00:13:04.737 "state": "online", 00:13:04.737 "raid_level": "raid0", 00:13:04.737 "superblock": true, 00:13:04.737 "num_base_bdevs": 4, 00:13:04.737 "num_base_bdevs_discovered": 4, 00:13:04.737 
"num_base_bdevs_operational": 4, 00:13:04.737 "base_bdevs_list": [ 00:13:04.737 { 00:13:04.737 "name": "BaseBdev1", 00:13:04.737 "uuid": "7a428077-2441-56eb-b530-8fbdcdfc6306", 00:13:04.737 "is_configured": true, 00:13:04.737 "data_offset": 2048, 00:13:04.737 "data_size": 63488 00:13:04.737 }, 00:13:04.737 { 00:13:04.737 "name": "BaseBdev2", 00:13:04.737 "uuid": "55b2f5b5-433f-528a-891a-abf0c31ce1cb", 00:13:04.737 "is_configured": true, 00:13:04.737 "data_offset": 2048, 00:13:04.737 "data_size": 63488 00:13:04.737 }, 00:13:04.737 { 00:13:04.737 "name": "BaseBdev3", 00:13:04.737 "uuid": "79d8bb25-cc89-5baa-9350-a80066235b02", 00:13:04.737 "is_configured": true, 00:13:04.737 "data_offset": 2048, 00:13:04.737 "data_size": 63488 00:13:04.737 }, 00:13:04.737 { 00:13:04.737 "name": "BaseBdev4", 00:13:04.737 "uuid": "ee16f419-b82a-5778-a7bf-83259223cb69", 00:13:04.737 "is_configured": true, 00:13:04.737 "data_offset": 2048, 00:13:04.737 "data_size": 63488 00:13:04.737 } 00:13:04.737 ] 00:13:04.737 }' 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.737 10:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.305 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:05.305 10:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.305 [2024-11-15 10:41:35.797072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.244 "name": "raid_bdev1", 00:13:06.244 "uuid": "3245cdee-4a49-460e-ae68-e36206ce8a3e", 00:13:06.244 "strip_size_kb": 64, 00:13:06.244 "state": "online", 00:13:06.244 "raid_level": "raid0", 00:13:06.244 "superblock": true, 00:13:06.244 "num_base_bdevs": 4, 00:13:06.244 "num_base_bdevs_discovered": 4, 00:13:06.244 "num_base_bdevs_operational": 4, 00:13:06.244 "base_bdevs_list": [ 00:13:06.244 { 00:13:06.244 "name": "BaseBdev1", 00:13:06.244 "uuid": "7a428077-2441-56eb-b530-8fbdcdfc6306", 00:13:06.244 "is_configured": true, 00:13:06.244 "data_offset": 2048, 00:13:06.244 "data_size": 63488 00:13:06.244 }, 00:13:06.244 { 00:13:06.244 "name": "BaseBdev2", 00:13:06.244 "uuid": "55b2f5b5-433f-528a-891a-abf0c31ce1cb", 00:13:06.244 "is_configured": true, 00:13:06.244 "data_offset": 2048, 00:13:06.244 "data_size": 63488 00:13:06.244 }, 00:13:06.244 { 00:13:06.244 "name": "BaseBdev3", 00:13:06.244 "uuid": "79d8bb25-cc89-5baa-9350-a80066235b02", 00:13:06.244 "is_configured": true, 00:13:06.244 "data_offset": 2048, 00:13:06.244 "data_size": 63488 00:13:06.244 }, 00:13:06.244 { 00:13:06.244 "name": "BaseBdev4", 00:13:06.244 "uuid": "ee16f419-b82a-5778-a7bf-83259223cb69", 00:13:06.244 "is_configured": true, 00:13:06.244 "data_offset": 2048, 00:13:06.244 "data_size": 63488 00:13:06.244 } 00:13:06.244 ] 00:13:06.244 }' 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.244 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:06.812 [2024-11-15 10:41:37.228148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.812 [2024-11-15 10:41:37.228204] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.812 [2024-11-15 10:41:37.231958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.812 [2024-11-15 10:41:37.232046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.812 [2024-11-15 10:41:37.232101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.812 [2024-11-15 10:41:37.232135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:06.812 { 00:13:06.812 "results": [ 00:13:06.812 { 00:13:06.812 "job": "raid_bdev1", 00:13:06.812 "core_mask": "0x1", 00:13:06.812 "workload": "randrw", 00:13:06.812 "percentage": 50, 00:13:06.812 "status": "finished", 00:13:06.812 "queue_depth": 1, 00:13:06.812 "io_size": 131072, 00:13:06.812 "runtime": 1.428846, 00:13:06.812 "iops": 10966.192297840355, 00:13:06.812 "mibps": 1370.7740372300443, 00:13:06.812 "io_failed": 1, 00:13:06.812 "io_timeout": 0, 00:13:06.812 "avg_latency_us": 125.38605836282416, 00:13:06.812 "min_latency_us": 38.63272727272727, 00:13:06.812 "max_latency_us": 2040.5527272727272 00:13:06.812 } 00:13:06.812 ], 00:13:06.812 "core_count": 1 00:13:06.812 } 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71426 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71426 ']' 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71426 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # 
uname 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71426 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:06.812 killing process with pid 71426 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71426' 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71426 00:13:06.812 [2024-11-15 10:41:37.270190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.812 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71426 00:13:07.071 [2024-11-15 10:41:37.534470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.451 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bHBO7Qxtim 00:13:08.451 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:08.451 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:08.451 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:08.451 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:08.451 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.451 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:08.451 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:08.451 00:13:08.451 real 0m4.852s 00:13:08.451 user 0m6.157s 00:13:08.451 sys 0m0.504s 00:13:08.451 
10:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:08.451 10:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.451 ************************************ 00:13:08.451 END TEST raid_write_error_test 00:13:08.451 ************************************ 00:13:08.451 10:41:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:08.451 10:41:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:08.451 10:41:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:08.451 10:41:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:08.451 10:41:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.451 ************************************ 00:13:08.451 START TEST raid_state_function_test 00:13:08.451 ************************************ 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.451 10:41:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:08.451 10:41:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71570 00:13:08.451 Process raid pid: 71570 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71570' 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71570 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71570 ']' 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:08.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:08.451 10:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.451 [2024-11-15 10:41:38.777259] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:13:08.451 [2024-11-15 10:41:38.777455] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.451 [2024-11-15 10:41:38.958153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.710 [2024-11-15 10:41:39.065859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.710 [2024-11-15 10:41:39.253040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.710 [2024-11-15 10:41:39.253084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.641 [2024-11-15 10:41:39.837929] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.641 [2024-11-15 10:41:39.837997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.641 [2024-11-15 10:41:39.838014] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.641 [2024-11-15 10:41:39.838030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.641 [2024-11-15 10:41:39.838040] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:09.641 [2024-11-15 10:41:39.838054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.641 [2024-11-15 10:41:39.838064] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:09.641 [2024-11-15 10:41:39.838077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.641 "name": "Existed_Raid", 00:13:09.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.641 "strip_size_kb": 64, 00:13:09.641 "state": "configuring", 00:13:09.641 "raid_level": "concat", 00:13:09.641 "superblock": false, 00:13:09.641 "num_base_bdevs": 4, 00:13:09.641 "num_base_bdevs_discovered": 0, 00:13:09.641 "num_base_bdevs_operational": 4, 00:13:09.641 "base_bdevs_list": [ 00:13:09.641 { 00:13:09.641 "name": "BaseBdev1", 00:13:09.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.641 "is_configured": false, 00:13:09.641 "data_offset": 0, 00:13:09.641 "data_size": 0 00:13:09.641 }, 00:13:09.641 { 00:13:09.641 "name": "BaseBdev2", 00:13:09.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.641 "is_configured": false, 00:13:09.641 "data_offset": 0, 00:13:09.641 "data_size": 0 00:13:09.641 }, 00:13:09.641 { 00:13:09.641 "name": "BaseBdev3", 00:13:09.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.641 "is_configured": false, 00:13:09.641 "data_offset": 0, 00:13:09.641 "data_size": 0 00:13:09.641 }, 00:13:09.641 { 00:13:09.641 "name": "BaseBdev4", 00:13:09.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.641 "is_configured": false, 00:13:09.641 "data_offset": 0, 00:13:09.641 "data_size": 0 00:13:09.641 } 00:13:09.641 ] 00:13:09.641 }' 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.641 10:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.899 [2024-11-15 10:41:40.390025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:09.899 [2024-11-15 10:41:40.390074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.899 [2024-11-15 10:41:40.398033] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.899 [2024-11-15 10:41:40.398098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.899 [2024-11-15 10:41:40.398113] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.899 [2024-11-15 10:41:40.398129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.899 [2024-11-15 10:41:40.398139] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:09.899 [2024-11-15 10:41:40.398153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.899 [2024-11-15 10:41:40.398162] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:09.899 [2024-11-15 10:41:40.398176] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.899 [2024-11-15 10:41:40.437770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.899 BaseBdev1 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.899 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.157 [ 00:13:10.157 { 00:13:10.157 "name": "BaseBdev1", 00:13:10.157 "aliases": [ 00:13:10.157 "dd40c4a2-9d4b-4e1f-b21c-b3059d3b203a" 00:13:10.157 ], 00:13:10.157 "product_name": "Malloc disk", 00:13:10.157 "block_size": 512, 00:13:10.157 "num_blocks": 65536, 00:13:10.157 "uuid": "dd40c4a2-9d4b-4e1f-b21c-b3059d3b203a", 00:13:10.157 "assigned_rate_limits": { 00:13:10.157 "rw_ios_per_sec": 0, 00:13:10.157 "rw_mbytes_per_sec": 0, 00:13:10.157 "r_mbytes_per_sec": 0, 00:13:10.157 "w_mbytes_per_sec": 0 00:13:10.157 }, 00:13:10.157 "claimed": true, 00:13:10.157 "claim_type": "exclusive_write", 00:13:10.157 "zoned": false, 00:13:10.157 "supported_io_types": { 00:13:10.157 "read": true, 00:13:10.157 "write": true, 00:13:10.157 "unmap": true, 00:13:10.157 "flush": true, 00:13:10.157 "reset": true, 00:13:10.157 "nvme_admin": false, 00:13:10.157 "nvme_io": false, 00:13:10.157 "nvme_io_md": false, 00:13:10.157 "write_zeroes": true, 00:13:10.157 "zcopy": true, 00:13:10.157 "get_zone_info": false, 00:13:10.157 "zone_management": false, 00:13:10.157 "zone_append": false, 00:13:10.157 "compare": false, 00:13:10.157 "compare_and_write": false, 00:13:10.157 "abort": true, 00:13:10.157 "seek_hole": false, 00:13:10.157 "seek_data": false, 00:13:10.157 "copy": true, 00:13:10.157 "nvme_iov_md": false 00:13:10.157 }, 00:13:10.157 "memory_domains": [ 00:13:10.157 { 00:13:10.157 "dma_device_id": "system", 00:13:10.157 "dma_device_type": 1 00:13:10.157 }, 00:13:10.157 { 00:13:10.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.157 "dma_device_type": 2 00:13:10.157 } 00:13:10.157 ], 00:13:10.157 "driver_specific": {} 00:13:10.157 } 00:13:10.157 ] 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.157 "name": "Existed_Raid", 
00:13:10.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.157 "strip_size_kb": 64, 00:13:10.157 "state": "configuring", 00:13:10.157 "raid_level": "concat", 00:13:10.157 "superblock": false, 00:13:10.157 "num_base_bdevs": 4, 00:13:10.157 "num_base_bdevs_discovered": 1, 00:13:10.157 "num_base_bdevs_operational": 4, 00:13:10.157 "base_bdevs_list": [ 00:13:10.157 { 00:13:10.157 "name": "BaseBdev1", 00:13:10.157 "uuid": "dd40c4a2-9d4b-4e1f-b21c-b3059d3b203a", 00:13:10.157 "is_configured": true, 00:13:10.157 "data_offset": 0, 00:13:10.157 "data_size": 65536 00:13:10.157 }, 00:13:10.157 { 00:13:10.157 "name": "BaseBdev2", 00:13:10.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.157 "is_configured": false, 00:13:10.157 "data_offset": 0, 00:13:10.157 "data_size": 0 00:13:10.157 }, 00:13:10.157 { 00:13:10.157 "name": "BaseBdev3", 00:13:10.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.157 "is_configured": false, 00:13:10.157 "data_offset": 0, 00:13:10.157 "data_size": 0 00:13:10.157 }, 00:13:10.157 { 00:13:10.157 "name": "BaseBdev4", 00:13:10.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.157 "is_configured": false, 00:13:10.157 "data_offset": 0, 00:13:10.157 "data_size": 0 00:13:10.157 } 00:13:10.157 ] 00:13:10.157 }' 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.157 10:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.724 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:10.724 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.724 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.724 [2024-11-15 10:41:41.009966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.724 [2024-11-15 10:41:41.010033] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:10.724 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.724 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:10.724 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 [2024-11-15 10:41:41.018012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.725 [2024-11-15 10:41:41.020293] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.725 [2024-11-15 10:41:41.020362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.725 [2024-11-15 10:41:41.020379] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.725 [2024-11-15 10:41:41.020398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.725 [2024-11-15 10:41:41.020408] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:10.725 [2024-11-15 10:41:41.020423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.725 "name": "Existed_Raid", 00:13:10.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.725 "strip_size_kb": 64, 00:13:10.725 "state": "configuring", 00:13:10.725 "raid_level": "concat", 00:13:10.725 "superblock": false, 00:13:10.725 "num_base_bdevs": 4, 00:13:10.725 
"num_base_bdevs_discovered": 1, 00:13:10.725 "num_base_bdevs_operational": 4, 00:13:10.725 "base_bdevs_list": [ 00:13:10.725 { 00:13:10.725 "name": "BaseBdev1", 00:13:10.725 "uuid": "dd40c4a2-9d4b-4e1f-b21c-b3059d3b203a", 00:13:10.725 "is_configured": true, 00:13:10.725 "data_offset": 0, 00:13:10.725 "data_size": 65536 00:13:10.725 }, 00:13:10.725 { 00:13:10.725 "name": "BaseBdev2", 00:13:10.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.725 "is_configured": false, 00:13:10.725 "data_offset": 0, 00:13:10.725 "data_size": 0 00:13:10.725 }, 00:13:10.725 { 00:13:10.725 "name": "BaseBdev3", 00:13:10.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.725 "is_configured": false, 00:13:10.725 "data_offset": 0, 00:13:10.725 "data_size": 0 00:13:10.725 }, 00:13:10.725 { 00:13:10.725 "name": "BaseBdev4", 00:13:10.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.725 "is_configured": false, 00:13:10.725 "data_offset": 0, 00:13:10.725 "data_size": 0 00:13:10.725 } 00:13:10.725 ] 00:13:10.725 }' 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.725 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.983 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:10.983 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.983 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.242 [2024-11-15 10:41:41.564156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.242 BaseBdev2 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:11.242 10:41:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.242 [ 00:13:11.242 { 00:13:11.242 "name": "BaseBdev2", 00:13:11.242 "aliases": [ 00:13:11.242 "58c0af58-ed15-46fb-bee6-1e698883be23" 00:13:11.242 ], 00:13:11.242 "product_name": "Malloc disk", 00:13:11.242 "block_size": 512, 00:13:11.242 "num_blocks": 65536, 00:13:11.242 "uuid": "58c0af58-ed15-46fb-bee6-1e698883be23", 00:13:11.242 "assigned_rate_limits": { 00:13:11.242 "rw_ios_per_sec": 0, 00:13:11.242 "rw_mbytes_per_sec": 0, 00:13:11.242 "r_mbytes_per_sec": 0, 00:13:11.242 "w_mbytes_per_sec": 0 00:13:11.242 }, 00:13:11.242 "claimed": true, 00:13:11.242 "claim_type": "exclusive_write", 00:13:11.242 "zoned": false, 00:13:11.242 "supported_io_types": { 
00:13:11.242 "read": true, 00:13:11.242 "write": true, 00:13:11.242 "unmap": true, 00:13:11.242 "flush": true, 00:13:11.242 "reset": true, 00:13:11.242 "nvme_admin": false, 00:13:11.242 "nvme_io": false, 00:13:11.242 "nvme_io_md": false, 00:13:11.242 "write_zeroes": true, 00:13:11.242 "zcopy": true, 00:13:11.242 "get_zone_info": false, 00:13:11.242 "zone_management": false, 00:13:11.242 "zone_append": false, 00:13:11.242 "compare": false, 00:13:11.242 "compare_and_write": false, 00:13:11.242 "abort": true, 00:13:11.242 "seek_hole": false, 00:13:11.242 "seek_data": false, 00:13:11.242 "copy": true, 00:13:11.242 "nvme_iov_md": false 00:13:11.242 }, 00:13:11.242 "memory_domains": [ 00:13:11.242 { 00:13:11.242 "dma_device_id": "system", 00:13:11.242 "dma_device_type": 1 00:13:11.242 }, 00:13:11.242 { 00:13:11.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.242 "dma_device_type": 2 00:13:11.242 } 00:13:11.242 ], 00:13:11.242 "driver_specific": {} 00:13:11.242 } 00:13:11.242 ] 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:11.242 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.243 "name": "Existed_Raid", 00:13:11.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.243 "strip_size_kb": 64, 00:13:11.243 "state": "configuring", 00:13:11.243 "raid_level": "concat", 00:13:11.243 "superblock": false, 00:13:11.243 "num_base_bdevs": 4, 00:13:11.243 "num_base_bdevs_discovered": 2, 00:13:11.243 "num_base_bdevs_operational": 4, 00:13:11.243 "base_bdevs_list": [ 00:13:11.243 { 00:13:11.243 "name": "BaseBdev1", 00:13:11.243 "uuid": "dd40c4a2-9d4b-4e1f-b21c-b3059d3b203a", 00:13:11.243 "is_configured": true, 00:13:11.243 "data_offset": 0, 00:13:11.243 "data_size": 65536 00:13:11.243 }, 00:13:11.243 { 00:13:11.243 "name": "BaseBdev2", 00:13:11.243 "uuid": "58c0af58-ed15-46fb-bee6-1e698883be23", 00:13:11.243 
"is_configured": true, 00:13:11.243 "data_offset": 0, 00:13:11.243 "data_size": 65536 00:13:11.243 }, 00:13:11.243 { 00:13:11.243 "name": "BaseBdev3", 00:13:11.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.243 "is_configured": false, 00:13:11.243 "data_offset": 0, 00:13:11.243 "data_size": 0 00:13:11.243 }, 00:13:11.243 { 00:13:11.243 "name": "BaseBdev4", 00:13:11.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.243 "is_configured": false, 00:13:11.243 "data_offset": 0, 00:13:11.243 "data_size": 0 00:13:11.243 } 00:13:11.243 ] 00:13:11.243 }' 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.243 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.810 [2024-11-15 10:41:42.157778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.810 BaseBdev3 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.810 [ 00:13:11.810 { 00:13:11.810 "name": "BaseBdev3", 00:13:11.810 "aliases": [ 00:13:11.810 "3362cf7e-af86-4837-987a-4d14af466b88" 00:13:11.810 ], 00:13:11.810 "product_name": "Malloc disk", 00:13:11.810 "block_size": 512, 00:13:11.810 "num_blocks": 65536, 00:13:11.810 "uuid": "3362cf7e-af86-4837-987a-4d14af466b88", 00:13:11.810 "assigned_rate_limits": { 00:13:11.810 "rw_ios_per_sec": 0, 00:13:11.810 "rw_mbytes_per_sec": 0, 00:13:11.810 "r_mbytes_per_sec": 0, 00:13:11.810 "w_mbytes_per_sec": 0 00:13:11.810 }, 00:13:11.810 "claimed": true, 00:13:11.810 "claim_type": "exclusive_write", 00:13:11.810 "zoned": false, 00:13:11.810 "supported_io_types": { 00:13:11.810 "read": true, 00:13:11.810 "write": true, 00:13:11.810 "unmap": true, 00:13:11.810 "flush": true, 00:13:11.810 "reset": true, 00:13:11.810 "nvme_admin": false, 00:13:11.810 "nvme_io": false, 00:13:11.810 "nvme_io_md": false, 00:13:11.810 "write_zeroes": true, 00:13:11.810 "zcopy": true, 00:13:11.810 "get_zone_info": false, 00:13:11.810 "zone_management": false, 00:13:11.810 "zone_append": false, 00:13:11.810 "compare": false, 00:13:11.810 "compare_and_write": false, 
00:13:11.810 "abort": true, 00:13:11.810 "seek_hole": false, 00:13:11.810 "seek_data": false, 00:13:11.810 "copy": true, 00:13:11.810 "nvme_iov_md": false 00:13:11.810 }, 00:13:11.810 "memory_domains": [ 00:13:11.810 { 00:13:11.810 "dma_device_id": "system", 00:13:11.810 "dma_device_type": 1 00:13:11.810 }, 00:13:11.810 { 00:13:11.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.810 "dma_device_type": 2 00:13:11.810 } 00:13:11.810 ], 00:13:11.810 "driver_specific": {} 00:13:11.810 } 00:13:11.810 ] 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.810 "name": "Existed_Raid", 00:13:11.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.810 "strip_size_kb": 64, 00:13:11.810 "state": "configuring", 00:13:11.810 "raid_level": "concat", 00:13:11.810 "superblock": false, 00:13:11.810 "num_base_bdevs": 4, 00:13:11.810 "num_base_bdevs_discovered": 3, 00:13:11.810 "num_base_bdevs_operational": 4, 00:13:11.810 "base_bdevs_list": [ 00:13:11.810 { 00:13:11.810 "name": "BaseBdev1", 00:13:11.810 "uuid": "dd40c4a2-9d4b-4e1f-b21c-b3059d3b203a", 00:13:11.810 "is_configured": true, 00:13:11.810 "data_offset": 0, 00:13:11.810 "data_size": 65536 00:13:11.810 }, 00:13:11.810 { 00:13:11.810 "name": "BaseBdev2", 00:13:11.810 "uuid": "58c0af58-ed15-46fb-bee6-1e698883be23", 00:13:11.810 "is_configured": true, 00:13:11.810 "data_offset": 0, 00:13:11.810 "data_size": 65536 00:13:11.810 }, 00:13:11.810 { 00:13:11.810 "name": "BaseBdev3", 00:13:11.810 "uuid": "3362cf7e-af86-4837-987a-4d14af466b88", 00:13:11.810 "is_configured": true, 00:13:11.810 "data_offset": 0, 00:13:11.810 "data_size": 65536 00:13:11.810 }, 00:13:11.810 { 00:13:11.810 "name": "BaseBdev4", 00:13:11.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.810 "is_configured": false, 
00:13:11.810 "data_offset": 0, 00:13:11.810 "data_size": 0 00:13:11.810 } 00:13:11.810 ] 00:13:11.810 }' 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.810 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.377 [2024-11-15 10:41:42.735825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:12.377 [2024-11-15 10:41:42.735888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:12.377 [2024-11-15 10:41:42.735902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:12.377 [2024-11-15 10:41:42.736246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:12.377 [2024-11-15 10:41:42.736493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:12.377 [2024-11-15 10:41:42.736527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:12.377 [2024-11-15 10:41:42.736831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.377 BaseBdev4 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.377 [ 00:13:12.377 { 00:13:12.377 "name": "BaseBdev4", 00:13:12.377 "aliases": [ 00:13:12.377 "79d02dbc-1a56-46ae-9d66-87312e380ec9" 00:13:12.377 ], 00:13:12.377 "product_name": "Malloc disk", 00:13:12.377 "block_size": 512, 00:13:12.377 "num_blocks": 65536, 00:13:12.377 "uuid": "79d02dbc-1a56-46ae-9d66-87312e380ec9", 00:13:12.377 "assigned_rate_limits": { 00:13:12.377 "rw_ios_per_sec": 0, 00:13:12.377 "rw_mbytes_per_sec": 0, 00:13:12.377 "r_mbytes_per_sec": 0, 00:13:12.377 "w_mbytes_per_sec": 0 00:13:12.377 }, 00:13:12.377 "claimed": true, 00:13:12.377 "claim_type": "exclusive_write", 00:13:12.377 "zoned": false, 00:13:12.377 "supported_io_types": { 00:13:12.377 "read": true, 00:13:12.377 "write": true, 00:13:12.377 "unmap": true, 00:13:12.377 "flush": true, 00:13:12.377 "reset": true, 00:13:12.377 
"nvme_admin": false, 00:13:12.377 "nvme_io": false, 00:13:12.377 "nvme_io_md": false, 00:13:12.377 "write_zeroes": true, 00:13:12.377 "zcopy": true, 00:13:12.377 "get_zone_info": false, 00:13:12.377 "zone_management": false, 00:13:12.377 "zone_append": false, 00:13:12.377 "compare": false, 00:13:12.377 "compare_and_write": false, 00:13:12.377 "abort": true, 00:13:12.377 "seek_hole": false, 00:13:12.377 "seek_data": false, 00:13:12.377 "copy": true, 00:13:12.377 "nvme_iov_md": false 00:13:12.377 }, 00:13:12.377 "memory_domains": [ 00:13:12.377 { 00:13:12.377 "dma_device_id": "system", 00:13:12.377 "dma_device_type": 1 00:13:12.377 }, 00:13:12.377 { 00:13:12.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.377 "dma_device_type": 2 00:13:12.377 } 00:13:12.377 ], 00:13:12.377 "driver_specific": {} 00:13:12.377 } 00:13:12.377 ] 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.377 
10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.377 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.378 "name": "Existed_Raid", 00:13:12.378 "uuid": "4c86c6a0-21a5-4f8a-8b07-e9d299fd62e2", 00:13:12.378 "strip_size_kb": 64, 00:13:12.378 "state": "online", 00:13:12.378 "raid_level": "concat", 00:13:12.378 "superblock": false, 00:13:12.378 "num_base_bdevs": 4, 00:13:12.378 "num_base_bdevs_discovered": 4, 00:13:12.378 "num_base_bdevs_operational": 4, 00:13:12.378 "base_bdevs_list": [ 00:13:12.378 { 00:13:12.378 "name": "BaseBdev1", 00:13:12.378 "uuid": "dd40c4a2-9d4b-4e1f-b21c-b3059d3b203a", 00:13:12.378 "is_configured": true, 00:13:12.378 "data_offset": 0, 00:13:12.378 "data_size": 65536 00:13:12.378 }, 00:13:12.378 { 00:13:12.378 "name": "BaseBdev2", 00:13:12.378 "uuid": "58c0af58-ed15-46fb-bee6-1e698883be23", 00:13:12.378 "is_configured": true, 00:13:12.378 "data_offset": 0, 00:13:12.378 "data_size": 65536 00:13:12.378 }, 00:13:12.378 { 00:13:12.378 "name": "BaseBdev3", 
00:13:12.378 "uuid": "3362cf7e-af86-4837-987a-4d14af466b88", 00:13:12.378 "is_configured": true, 00:13:12.378 "data_offset": 0, 00:13:12.378 "data_size": 65536 00:13:12.378 }, 00:13:12.378 { 00:13:12.378 "name": "BaseBdev4", 00:13:12.378 "uuid": "79d02dbc-1a56-46ae-9d66-87312e380ec9", 00:13:12.378 "is_configured": true, 00:13:12.378 "data_offset": 0, 00:13:12.378 "data_size": 65536 00:13:12.378 } 00:13:12.378 ] 00:13:12.378 }' 00:13:12.378 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.378 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:12.945 [2024-11-15 10:41:43.308534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.945 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.945 
10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:12.945 "name": "Existed_Raid", 00:13:12.945 "aliases": [ 00:13:12.945 "4c86c6a0-21a5-4f8a-8b07-e9d299fd62e2" 00:13:12.945 ], 00:13:12.945 "product_name": "Raid Volume", 00:13:12.945 "block_size": 512, 00:13:12.945 "num_blocks": 262144, 00:13:12.945 "uuid": "4c86c6a0-21a5-4f8a-8b07-e9d299fd62e2", 00:13:12.945 "assigned_rate_limits": { 00:13:12.945 "rw_ios_per_sec": 0, 00:13:12.945 "rw_mbytes_per_sec": 0, 00:13:12.945 "r_mbytes_per_sec": 0, 00:13:12.945 "w_mbytes_per_sec": 0 00:13:12.945 }, 00:13:12.945 "claimed": false, 00:13:12.945 "zoned": false, 00:13:12.945 "supported_io_types": { 00:13:12.945 "read": true, 00:13:12.945 "write": true, 00:13:12.945 "unmap": true, 00:13:12.946 "flush": true, 00:13:12.946 "reset": true, 00:13:12.946 "nvme_admin": false, 00:13:12.946 "nvme_io": false, 00:13:12.946 "nvme_io_md": false, 00:13:12.946 "write_zeroes": true, 00:13:12.946 "zcopy": false, 00:13:12.946 "get_zone_info": false, 00:13:12.946 "zone_management": false, 00:13:12.946 "zone_append": false, 00:13:12.946 "compare": false, 00:13:12.946 "compare_and_write": false, 00:13:12.946 "abort": false, 00:13:12.946 "seek_hole": false, 00:13:12.946 "seek_data": false, 00:13:12.946 "copy": false, 00:13:12.946 "nvme_iov_md": false 00:13:12.946 }, 00:13:12.946 "memory_domains": [ 00:13:12.946 { 00:13:12.946 "dma_device_id": "system", 00:13:12.946 "dma_device_type": 1 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.946 "dma_device_type": 2 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "dma_device_id": "system", 00:13:12.946 "dma_device_type": 1 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.946 "dma_device_type": 2 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "dma_device_id": "system", 00:13:12.946 "dma_device_type": 1 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:12.946 "dma_device_type": 2 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "dma_device_id": "system", 00:13:12.946 "dma_device_type": 1 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.946 "dma_device_type": 2 00:13:12.946 } 00:13:12.946 ], 00:13:12.946 "driver_specific": { 00:13:12.946 "raid": { 00:13:12.946 "uuid": "4c86c6a0-21a5-4f8a-8b07-e9d299fd62e2", 00:13:12.946 "strip_size_kb": 64, 00:13:12.946 "state": "online", 00:13:12.946 "raid_level": "concat", 00:13:12.946 "superblock": false, 00:13:12.946 "num_base_bdevs": 4, 00:13:12.946 "num_base_bdevs_discovered": 4, 00:13:12.946 "num_base_bdevs_operational": 4, 00:13:12.946 "base_bdevs_list": [ 00:13:12.946 { 00:13:12.946 "name": "BaseBdev1", 00:13:12.946 "uuid": "dd40c4a2-9d4b-4e1f-b21c-b3059d3b203a", 00:13:12.946 "is_configured": true, 00:13:12.946 "data_offset": 0, 00:13:12.946 "data_size": 65536 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "name": "BaseBdev2", 00:13:12.946 "uuid": "58c0af58-ed15-46fb-bee6-1e698883be23", 00:13:12.946 "is_configured": true, 00:13:12.946 "data_offset": 0, 00:13:12.946 "data_size": 65536 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "name": "BaseBdev3", 00:13:12.946 "uuid": "3362cf7e-af86-4837-987a-4d14af466b88", 00:13:12.946 "is_configured": true, 00:13:12.946 "data_offset": 0, 00:13:12.946 "data_size": 65536 00:13:12.946 }, 00:13:12.946 { 00:13:12.946 "name": "BaseBdev4", 00:13:12.946 "uuid": "79d02dbc-1a56-46ae-9d66-87312e380ec9", 00:13:12.946 "is_configured": true, 00:13:12.946 "data_offset": 0, 00:13:12.946 "data_size": 65536 00:13:12.946 } 00:13:12.946 ] 00:13:12.946 } 00:13:12.946 } 00:13:12.946 }' 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:12.946 BaseBdev2 
00:13:12.946 BaseBdev3 00:13:12.946 BaseBdev4' 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.946 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.205 10:41:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.205 10:41:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.205 [2024-11-15 10:41:43.668269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:13.205 [2024-11-15 10:41:43.668310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.205 [2024-11-15 10:41:43.668392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.205 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.464 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.464 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.464 "name": "Existed_Raid", 00:13:13.464 "uuid": "4c86c6a0-21a5-4f8a-8b07-e9d299fd62e2", 00:13:13.464 "strip_size_kb": 64, 00:13:13.464 "state": "offline", 00:13:13.464 "raid_level": "concat", 00:13:13.464 "superblock": false, 00:13:13.464 "num_base_bdevs": 4, 00:13:13.464 "num_base_bdevs_discovered": 3, 00:13:13.464 "num_base_bdevs_operational": 3, 00:13:13.464 "base_bdevs_list": [ 00:13:13.464 { 00:13:13.464 "name": null, 00:13:13.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.464 "is_configured": false, 00:13:13.464 "data_offset": 0, 00:13:13.464 "data_size": 65536 00:13:13.464 }, 00:13:13.464 { 00:13:13.464 "name": "BaseBdev2", 00:13:13.464 "uuid": "58c0af58-ed15-46fb-bee6-1e698883be23", 00:13:13.464 "is_configured": 
true, 00:13:13.464 "data_offset": 0, 00:13:13.464 "data_size": 65536 00:13:13.464 }, 00:13:13.464 { 00:13:13.464 "name": "BaseBdev3", 00:13:13.464 "uuid": "3362cf7e-af86-4837-987a-4d14af466b88", 00:13:13.464 "is_configured": true, 00:13:13.464 "data_offset": 0, 00:13:13.464 "data_size": 65536 00:13:13.464 }, 00:13:13.464 { 00:13:13.464 "name": "BaseBdev4", 00:13:13.464 "uuid": "79d02dbc-1a56-46ae-9d66-87312e380ec9", 00:13:13.464 "is_configured": true, 00:13:13.464 "data_offset": 0, 00:13:13.464 "data_size": 65536 00:13:13.464 } 00:13:13.464 ] 00:13:13.464 }' 00:13:13.464 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.464 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.722 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:13.722 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.722 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.722 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.722 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.722 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.722 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.990 [2024-11-15 10:41:44.307840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.990 [2024-11-15 10:41:44.448069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:13.990 10:41:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.990 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.249 [2024-11-15 10:41:44.579925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:14.249 [2024-11-15 10:41:44.579989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.249 BaseBdev2 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.249 [ 00:13:14.249 { 00:13:14.249 "name": "BaseBdev2", 00:13:14.249 "aliases": [ 00:13:14.249 "0d6990df-71cf-4cef-8685-1f63ed23b84c" 00:13:14.249 ], 00:13:14.249 "product_name": "Malloc disk", 00:13:14.249 "block_size": 512, 00:13:14.249 "num_blocks": 65536, 00:13:14.249 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:14.249 "assigned_rate_limits": { 00:13:14.249 "rw_ios_per_sec": 0, 00:13:14.249 "rw_mbytes_per_sec": 0, 00:13:14.249 "r_mbytes_per_sec": 0, 00:13:14.249 "w_mbytes_per_sec": 0 00:13:14.249 }, 00:13:14.249 "claimed": false, 00:13:14.249 "zoned": false, 00:13:14.249 "supported_io_types": { 00:13:14.249 "read": true, 00:13:14.249 "write": true, 00:13:14.249 "unmap": true, 00:13:14.249 "flush": true, 00:13:14.249 "reset": true, 00:13:14.249 "nvme_admin": false, 00:13:14.249 "nvme_io": false, 00:13:14.249 "nvme_io_md": false, 00:13:14.249 "write_zeroes": true, 00:13:14.249 "zcopy": true, 00:13:14.249 "get_zone_info": false, 00:13:14.249 "zone_management": false, 00:13:14.249 "zone_append": false, 00:13:14.249 "compare": false, 00:13:14.249 "compare_and_write": false, 00:13:14.249 "abort": true, 00:13:14.249 "seek_hole": false, 00:13:14.249 
"seek_data": false, 00:13:14.249 "copy": true, 00:13:14.249 "nvme_iov_md": false 00:13:14.249 }, 00:13:14.249 "memory_domains": [ 00:13:14.249 { 00:13:14.249 "dma_device_id": "system", 00:13:14.249 "dma_device_type": 1 00:13:14.249 }, 00:13:14.249 { 00:13:14.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.249 "dma_device_type": 2 00:13:14.249 } 00:13:14.249 ], 00:13:14.249 "driver_specific": {} 00:13:14.249 } 00:13:14.249 ] 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.249 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.508 BaseBdev3 00:13:14.508 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.508 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.509 [ 00:13:14.509 { 00:13:14.509 "name": "BaseBdev3", 00:13:14.509 "aliases": [ 00:13:14.509 "4f44fbc0-808b-457a-a88f-d87cd34d0d3f" 00:13:14.509 ], 00:13:14.509 "product_name": "Malloc disk", 00:13:14.509 "block_size": 512, 00:13:14.509 "num_blocks": 65536, 00:13:14.509 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:14.509 "assigned_rate_limits": { 00:13:14.509 "rw_ios_per_sec": 0, 00:13:14.509 "rw_mbytes_per_sec": 0, 00:13:14.509 "r_mbytes_per_sec": 0, 00:13:14.509 "w_mbytes_per_sec": 0 00:13:14.509 }, 00:13:14.509 "claimed": false, 00:13:14.509 "zoned": false, 00:13:14.509 "supported_io_types": { 00:13:14.509 "read": true, 00:13:14.509 "write": true, 00:13:14.509 "unmap": true, 00:13:14.509 "flush": true, 00:13:14.509 "reset": true, 00:13:14.509 "nvme_admin": false, 00:13:14.509 "nvme_io": false, 00:13:14.509 "nvme_io_md": false, 00:13:14.509 "write_zeroes": true, 00:13:14.509 "zcopy": true, 00:13:14.509 "get_zone_info": false, 00:13:14.509 "zone_management": false, 00:13:14.509 "zone_append": false, 00:13:14.509 "compare": false, 00:13:14.509 "compare_and_write": false, 00:13:14.509 "abort": true, 00:13:14.509 "seek_hole": false, 00:13:14.509 "seek_data": false, 
00:13:14.509 "copy": true, 00:13:14.509 "nvme_iov_md": false 00:13:14.509 }, 00:13:14.509 "memory_domains": [ 00:13:14.509 { 00:13:14.509 "dma_device_id": "system", 00:13:14.509 "dma_device_type": 1 00:13:14.509 }, 00:13:14.509 { 00:13:14.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.509 "dma_device_type": 2 00:13:14.509 } 00:13:14.509 ], 00:13:14.509 "driver_specific": {} 00:13:14.509 } 00:13:14.509 ] 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.509 BaseBdev4 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:14.509 
10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.509 [ 00:13:14.509 { 00:13:14.509 "name": "BaseBdev4", 00:13:14.509 "aliases": [ 00:13:14.509 "7b502b89-dcc2-4749-8bb9-960d485859d2" 00:13:14.509 ], 00:13:14.509 "product_name": "Malloc disk", 00:13:14.509 "block_size": 512, 00:13:14.509 "num_blocks": 65536, 00:13:14.509 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:14.509 "assigned_rate_limits": { 00:13:14.509 "rw_ios_per_sec": 0, 00:13:14.509 "rw_mbytes_per_sec": 0, 00:13:14.509 "r_mbytes_per_sec": 0, 00:13:14.509 "w_mbytes_per_sec": 0 00:13:14.509 }, 00:13:14.509 "claimed": false, 00:13:14.509 "zoned": false, 00:13:14.509 "supported_io_types": { 00:13:14.509 "read": true, 00:13:14.509 "write": true, 00:13:14.509 "unmap": true, 00:13:14.509 "flush": true, 00:13:14.509 "reset": true, 00:13:14.509 "nvme_admin": false, 00:13:14.509 "nvme_io": false, 00:13:14.509 "nvme_io_md": false, 00:13:14.509 "write_zeroes": true, 00:13:14.509 "zcopy": true, 00:13:14.509 "get_zone_info": false, 00:13:14.509 "zone_management": false, 00:13:14.509 "zone_append": false, 00:13:14.509 "compare": false, 00:13:14.509 "compare_and_write": false, 00:13:14.509 "abort": true, 00:13:14.509 "seek_hole": false, 00:13:14.509 "seek_data": false, 00:13:14.509 
"copy": true, 00:13:14.509 "nvme_iov_md": false 00:13:14.509 }, 00:13:14.509 "memory_domains": [ 00:13:14.509 { 00:13:14.509 "dma_device_id": "system", 00:13:14.509 "dma_device_type": 1 00:13:14.509 }, 00:13:14.509 { 00:13:14.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.509 "dma_device_type": 2 00:13:14.509 } 00:13:14.509 ], 00:13:14.509 "driver_specific": {} 00:13:14.509 } 00:13:14.509 ] 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.509 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.509 [2024-11-15 10:41:44.921772] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.509 [2024-11-15 10:41:44.921825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.509 [2024-11-15 10:41:44.921855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.510 [2024-11-15 10:41:44.924070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.510 [2024-11-15 10:41:44.924148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.510 10:41:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.510 "name": "Existed_Raid", 00:13:14.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.510 "strip_size_kb": 64, 00:13:14.510 "state": "configuring", 00:13:14.510 
"raid_level": "concat", 00:13:14.510 "superblock": false, 00:13:14.510 "num_base_bdevs": 4, 00:13:14.510 "num_base_bdevs_discovered": 3, 00:13:14.510 "num_base_bdevs_operational": 4, 00:13:14.510 "base_bdevs_list": [ 00:13:14.510 { 00:13:14.510 "name": "BaseBdev1", 00:13:14.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.510 "is_configured": false, 00:13:14.510 "data_offset": 0, 00:13:14.510 "data_size": 0 00:13:14.510 }, 00:13:14.510 { 00:13:14.510 "name": "BaseBdev2", 00:13:14.510 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:14.510 "is_configured": true, 00:13:14.510 "data_offset": 0, 00:13:14.510 "data_size": 65536 00:13:14.510 }, 00:13:14.510 { 00:13:14.510 "name": "BaseBdev3", 00:13:14.510 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:14.510 "is_configured": true, 00:13:14.510 "data_offset": 0, 00:13:14.510 "data_size": 65536 00:13:14.510 }, 00:13:14.510 { 00:13:14.510 "name": "BaseBdev4", 00:13:14.510 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:14.510 "is_configured": true, 00:13:14.510 "data_offset": 0, 00:13:14.510 "data_size": 65536 00:13:14.510 } 00:13:14.510 ] 00:13:14.510 }' 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.510 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.077 [2024-11-15 10:41:45.433944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.077 "name": "Existed_Raid", 00:13:15.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.077 "strip_size_kb": 64, 00:13:15.077 "state": "configuring", 00:13:15.077 "raid_level": "concat", 00:13:15.077 "superblock": false, 
00:13:15.077 "num_base_bdevs": 4, 00:13:15.077 "num_base_bdevs_discovered": 2, 00:13:15.077 "num_base_bdevs_operational": 4, 00:13:15.077 "base_bdevs_list": [ 00:13:15.077 { 00:13:15.077 "name": "BaseBdev1", 00:13:15.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.077 "is_configured": false, 00:13:15.077 "data_offset": 0, 00:13:15.077 "data_size": 0 00:13:15.077 }, 00:13:15.077 { 00:13:15.077 "name": null, 00:13:15.077 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:15.077 "is_configured": false, 00:13:15.077 "data_offset": 0, 00:13:15.077 "data_size": 65536 00:13:15.077 }, 00:13:15.077 { 00:13:15.077 "name": "BaseBdev3", 00:13:15.077 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:15.077 "is_configured": true, 00:13:15.077 "data_offset": 0, 00:13:15.077 "data_size": 65536 00:13:15.077 }, 00:13:15.077 { 00:13:15.077 "name": "BaseBdev4", 00:13:15.077 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:15.077 "is_configured": true, 00:13:15.077 "data_offset": 0, 00:13:15.077 "data_size": 65536 00:13:15.077 } 00:13:15.077 ] 00:13:15.077 }' 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.077 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.645 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:15.645 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.645 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.645 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.645 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.645 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:15.645 10:41:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.645 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.645 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.645 [2024-11-15 10:41:46.023743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.645 BaseBdev1 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.645 [ 00:13:15.645 { 00:13:15.645 "name": "BaseBdev1", 00:13:15.645 "aliases": [ 00:13:15.645 "de7f5f9c-496d-46e1-8448-f591ca455605" 00:13:15.645 ], 00:13:15.645 "product_name": "Malloc disk", 00:13:15.645 "block_size": 512, 00:13:15.645 "num_blocks": 65536, 00:13:15.645 "uuid": "de7f5f9c-496d-46e1-8448-f591ca455605", 00:13:15.645 "assigned_rate_limits": { 00:13:15.645 "rw_ios_per_sec": 0, 00:13:15.645 "rw_mbytes_per_sec": 0, 00:13:15.645 "r_mbytes_per_sec": 0, 00:13:15.645 "w_mbytes_per_sec": 0 00:13:15.645 }, 00:13:15.645 "claimed": true, 00:13:15.645 "claim_type": "exclusive_write", 00:13:15.645 "zoned": false, 00:13:15.645 "supported_io_types": { 00:13:15.645 "read": true, 00:13:15.645 "write": true, 00:13:15.645 "unmap": true, 00:13:15.645 "flush": true, 00:13:15.645 "reset": true, 00:13:15.645 "nvme_admin": false, 00:13:15.645 "nvme_io": false, 00:13:15.645 "nvme_io_md": false, 00:13:15.645 "write_zeroes": true, 00:13:15.645 "zcopy": true, 00:13:15.645 "get_zone_info": false, 00:13:15.645 "zone_management": false, 00:13:15.645 "zone_append": false, 00:13:15.645 "compare": false, 00:13:15.645 "compare_and_write": false, 00:13:15.645 "abort": true, 00:13:15.645 "seek_hole": false, 00:13:15.645 "seek_data": false, 00:13:15.645 "copy": true, 00:13:15.645 "nvme_iov_md": false 00:13:15.645 }, 00:13:15.645 "memory_domains": [ 00:13:15.645 { 00:13:15.645 "dma_device_id": "system", 00:13:15.645 "dma_device_type": 1 00:13:15.645 }, 00:13:15.645 { 00:13:15.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.645 "dma_device_type": 2 00:13:15.645 } 00:13:15.645 ], 00:13:15.645 "driver_specific": {} 00:13:15.645 } 00:13:15.645 ] 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.645 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.646 "name": "Existed_Raid", 00:13:15.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.646 "strip_size_kb": 64, 00:13:15.646 "state": "configuring", 00:13:15.646 "raid_level": "concat", 00:13:15.646 "superblock": false, 
00:13:15.646 "num_base_bdevs": 4, 00:13:15.646 "num_base_bdevs_discovered": 3, 00:13:15.646 "num_base_bdevs_operational": 4, 00:13:15.646 "base_bdevs_list": [ 00:13:15.646 { 00:13:15.646 "name": "BaseBdev1", 00:13:15.646 "uuid": "de7f5f9c-496d-46e1-8448-f591ca455605", 00:13:15.646 "is_configured": true, 00:13:15.646 "data_offset": 0, 00:13:15.646 "data_size": 65536 00:13:15.646 }, 00:13:15.646 { 00:13:15.646 "name": null, 00:13:15.646 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:15.646 "is_configured": false, 00:13:15.646 "data_offset": 0, 00:13:15.646 "data_size": 65536 00:13:15.646 }, 00:13:15.646 { 00:13:15.646 "name": "BaseBdev3", 00:13:15.646 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:15.646 "is_configured": true, 00:13:15.646 "data_offset": 0, 00:13:15.646 "data_size": 65536 00:13:15.646 }, 00:13:15.646 { 00:13:15.646 "name": "BaseBdev4", 00:13:15.646 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:15.646 "is_configured": true, 00:13:15.646 "data_offset": 0, 00:13:15.646 "data_size": 65536 00:13:15.646 } 00:13:15.646 ] 00:13:15.646 }' 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.646 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:16.212 10:41:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.212 [2024-11-15 10:41:46.636019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.212 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.212 "name": "Existed_Raid", 00:13:16.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.212 "strip_size_kb": 64, 00:13:16.212 "state": "configuring", 00:13:16.212 "raid_level": "concat", 00:13:16.212 "superblock": false, 00:13:16.212 "num_base_bdevs": 4, 00:13:16.212 "num_base_bdevs_discovered": 2, 00:13:16.212 "num_base_bdevs_operational": 4, 00:13:16.212 "base_bdevs_list": [ 00:13:16.212 { 00:13:16.212 "name": "BaseBdev1", 00:13:16.212 "uuid": "de7f5f9c-496d-46e1-8448-f591ca455605", 00:13:16.212 "is_configured": true, 00:13:16.212 "data_offset": 0, 00:13:16.212 "data_size": 65536 00:13:16.212 }, 00:13:16.212 { 00:13:16.212 "name": null, 00:13:16.212 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:16.213 "is_configured": false, 00:13:16.213 "data_offset": 0, 00:13:16.213 "data_size": 65536 00:13:16.213 }, 00:13:16.213 { 00:13:16.213 "name": null, 00:13:16.213 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:16.213 "is_configured": false, 00:13:16.213 "data_offset": 0, 00:13:16.213 "data_size": 65536 00:13:16.213 }, 00:13:16.213 { 00:13:16.213 "name": "BaseBdev4", 00:13:16.213 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:16.213 "is_configured": true, 00:13:16.213 "data_offset": 0, 00:13:16.213 "data_size": 65536 00:13:16.213 } 00:13:16.213 ] 00:13:16.213 }' 00:13:16.213 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.213 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.780 [2024-11-15 10:41:47.196136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.780 "name": "Existed_Raid", 00:13:16.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.780 "strip_size_kb": 64, 00:13:16.780 "state": "configuring", 00:13:16.780 "raid_level": "concat", 00:13:16.780 "superblock": false, 00:13:16.780 "num_base_bdevs": 4, 00:13:16.780 "num_base_bdevs_discovered": 3, 00:13:16.780 "num_base_bdevs_operational": 4, 00:13:16.780 "base_bdevs_list": [ 00:13:16.780 { 00:13:16.780 "name": "BaseBdev1", 00:13:16.780 "uuid": "de7f5f9c-496d-46e1-8448-f591ca455605", 00:13:16.780 "is_configured": true, 00:13:16.780 "data_offset": 0, 00:13:16.780 "data_size": 65536 00:13:16.780 }, 00:13:16.780 { 00:13:16.780 "name": null, 00:13:16.780 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:16.780 "is_configured": false, 00:13:16.780 "data_offset": 0, 00:13:16.780 "data_size": 65536 00:13:16.780 }, 00:13:16.780 { 00:13:16.780 "name": "BaseBdev3", 00:13:16.780 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:16.780 
"is_configured": true, 00:13:16.780 "data_offset": 0, 00:13:16.780 "data_size": 65536 00:13:16.780 }, 00:13:16.780 { 00:13:16.780 "name": "BaseBdev4", 00:13:16.780 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:16.780 "is_configured": true, 00:13:16.780 "data_offset": 0, 00:13:16.780 "data_size": 65536 00:13:16.780 } 00:13:16.780 ] 00:13:16.780 }' 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.780 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.347 [2024-11-15 10:41:47.752326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.347 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.348 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.348 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.348 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.348 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.348 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.348 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.348 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.348 "name": "Existed_Raid", 00:13:17.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.348 "strip_size_kb": 64, 00:13:17.348 "state": "configuring", 00:13:17.348 "raid_level": "concat", 00:13:17.348 "superblock": false, 00:13:17.348 "num_base_bdevs": 4, 00:13:17.348 "num_base_bdevs_discovered": 2, 00:13:17.348 "num_base_bdevs_operational": 4, 
00:13:17.348 "base_bdevs_list": [ 00:13:17.348 { 00:13:17.348 "name": null, 00:13:17.348 "uuid": "de7f5f9c-496d-46e1-8448-f591ca455605", 00:13:17.348 "is_configured": false, 00:13:17.348 "data_offset": 0, 00:13:17.348 "data_size": 65536 00:13:17.348 }, 00:13:17.348 { 00:13:17.348 "name": null, 00:13:17.348 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:17.348 "is_configured": false, 00:13:17.348 "data_offset": 0, 00:13:17.348 "data_size": 65536 00:13:17.348 }, 00:13:17.348 { 00:13:17.348 "name": "BaseBdev3", 00:13:17.348 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:17.348 "is_configured": true, 00:13:17.348 "data_offset": 0, 00:13:17.348 "data_size": 65536 00:13:17.348 }, 00:13:17.348 { 00:13:17.348 "name": "BaseBdev4", 00:13:17.348 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:17.348 "is_configured": true, 00:13:17.348 "data_offset": 0, 00:13:17.348 "data_size": 65536 00:13:17.348 } 00:13:17.348 ] 00:13:17.348 }' 00:13:17.348 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.348 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.914 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.914 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:17.915 10:41:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.915 [2024-11-15 10:41:48.388047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.915 10:41:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.915 "name": "Existed_Raid", 00:13:17.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.915 "strip_size_kb": 64, 00:13:17.915 "state": "configuring", 00:13:17.915 "raid_level": "concat", 00:13:17.915 "superblock": false, 00:13:17.915 "num_base_bdevs": 4, 00:13:17.915 "num_base_bdevs_discovered": 3, 00:13:17.915 "num_base_bdevs_operational": 4, 00:13:17.915 "base_bdevs_list": [ 00:13:17.915 { 00:13:17.915 "name": null, 00:13:17.915 "uuid": "de7f5f9c-496d-46e1-8448-f591ca455605", 00:13:17.915 "is_configured": false, 00:13:17.915 "data_offset": 0, 00:13:17.915 "data_size": 65536 00:13:17.915 }, 00:13:17.915 { 00:13:17.915 "name": "BaseBdev2", 00:13:17.915 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:17.915 "is_configured": true, 00:13:17.915 "data_offset": 0, 00:13:17.915 "data_size": 65536 00:13:17.915 }, 00:13:17.915 { 00:13:17.915 "name": "BaseBdev3", 00:13:17.915 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:17.915 "is_configured": true, 00:13:17.915 "data_offset": 0, 00:13:17.915 "data_size": 65536 00:13:17.915 }, 00:13:17.915 { 00:13:17.915 "name": "BaseBdev4", 00:13:17.915 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:17.915 "is_configured": true, 00:13:17.915 "data_offset": 0, 00:13:17.915 "data_size": 65536 00:13:17.915 } 00:13:17.915 ] 00:13:17.915 }' 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.915 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u de7f5f9c-496d-46e1-8448-f591ca455605 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.482 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.482 [2024-11-15 10:41:49.029924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:18.482 [2024-11-15 10:41:49.029984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:18.482 [2024-11-15 10:41:49.029997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:18.482 [2024-11-15 10:41:49.030311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:18.482 [2024-11-15 10:41:49.030514] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:18.482 [2024-11-15 10:41:49.030542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:18.482 [2024-11-15 10:41:49.030813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.482 NewBaseBdev 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.482 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.741 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.741 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:18.741 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.741 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.741 [ 00:13:18.741 { 
00:13:18.741 "name": "NewBaseBdev", 00:13:18.741 "aliases": [ 00:13:18.741 "de7f5f9c-496d-46e1-8448-f591ca455605" 00:13:18.741 ], 00:13:18.741 "product_name": "Malloc disk", 00:13:18.741 "block_size": 512, 00:13:18.741 "num_blocks": 65536, 00:13:18.741 "uuid": "de7f5f9c-496d-46e1-8448-f591ca455605", 00:13:18.741 "assigned_rate_limits": { 00:13:18.741 "rw_ios_per_sec": 0, 00:13:18.741 "rw_mbytes_per_sec": 0, 00:13:18.741 "r_mbytes_per_sec": 0, 00:13:18.741 "w_mbytes_per_sec": 0 00:13:18.741 }, 00:13:18.741 "claimed": true, 00:13:18.741 "claim_type": "exclusive_write", 00:13:18.741 "zoned": false, 00:13:18.742 "supported_io_types": { 00:13:18.742 "read": true, 00:13:18.742 "write": true, 00:13:18.742 "unmap": true, 00:13:18.742 "flush": true, 00:13:18.742 "reset": true, 00:13:18.742 "nvme_admin": false, 00:13:18.742 "nvme_io": false, 00:13:18.742 "nvme_io_md": false, 00:13:18.742 "write_zeroes": true, 00:13:18.742 "zcopy": true, 00:13:18.742 "get_zone_info": false, 00:13:18.742 "zone_management": false, 00:13:18.742 "zone_append": false, 00:13:18.742 "compare": false, 00:13:18.742 "compare_and_write": false, 00:13:18.742 "abort": true, 00:13:18.742 "seek_hole": false, 00:13:18.742 "seek_data": false, 00:13:18.742 "copy": true, 00:13:18.742 "nvme_iov_md": false 00:13:18.742 }, 00:13:18.742 "memory_domains": [ 00:13:18.742 { 00:13:18.742 "dma_device_id": "system", 00:13:18.742 "dma_device_type": 1 00:13:18.742 }, 00:13:18.742 { 00:13:18.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.742 "dma_device_type": 2 00:13:18.742 } 00:13:18.742 ], 00:13:18.742 "driver_specific": {} 00:13:18.742 } 00:13:18.742 ] 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:18.742 
10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.742 "name": "Existed_Raid", 00:13:18.742 "uuid": "07e81ebe-fc2e-4fbb-bb2a-ea77c7786606", 00:13:18.742 "strip_size_kb": 64, 00:13:18.742 "state": "online", 00:13:18.742 "raid_level": "concat", 00:13:18.742 "superblock": false, 00:13:18.742 "num_base_bdevs": 4, 00:13:18.742 "num_base_bdevs_discovered": 4, 00:13:18.742 
"num_base_bdevs_operational": 4, 00:13:18.742 "base_bdevs_list": [ 00:13:18.742 { 00:13:18.742 "name": "NewBaseBdev", 00:13:18.742 "uuid": "de7f5f9c-496d-46e1-8448-f591ca455605", 00:13:18.742 "is_configured": true, 00:13:18.742 "data_offset": 0, 00:13:18.742 "data_size": 65536 00:13:18.742 }, 00:13:18.742 { 00:13:18.742 "name": "BaseBdev2", 00:13:18.742 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:18.742 "is_configured": true, 00:13:18.742 "data_offset": 0, 00:13:18.742 "data_size": 65536 00:13:18.742 }, 00:13:18.742 { 00:13:18.742 "name": "BaseBdev3", 00:13:18.742 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:18.742 "is_configured": true, 00:13:18.742 "data_offset": 0, 00:13:18.742 "data_size": 65536 00:13:18.742 }, 00:13:18.742 { 00:13:18.742 "name": "BaseBdev4", 00:13:18.742 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:18.742 "is_configured": true, 00:13:18.742 "data_offset": 0, 00:13:18.742 "data_size": 65536 00:13:18.742 } 00:13:18.742 ] 00:13:18.742 }' 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.742 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.310 [2024-11-15 10:41:49.610597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.310 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.310 "name": "Existed_Raid", 00:13:19.310 "aliases": [ 00:13:19.310 "07e81ebe-fc2e-4fbb-bb2a-ea77c7786606" 00:13:19.310 ], 00:13:19.310 "product_name": "Raid Volume", 00:13:19.310 "block_size": 512, 00:13:19.310 "num_blocks": 262144, 00:13:19.310 "uuid": "07e81ebe-fc2e-4fbb-bb2a-ea77c7786606", 00:13:19.310 "assigned_rate_limits": { 00:13:19.310 "rw_ios_per_sec": 0, 00:13:19.310 "rw_mbytes_per_sec": 0, 00:13:19.310 "r_mbytes_per_sec": 0, 00:13:19.310 "w_mbytes_per_sec": 0 00:13:19.310 }, 00:13:19.310 "claimed": false, 00:13:19.310 "zoned": false, 00:13:19.310 "supported_io_types": { 00:13:19.310 "read": true, 00:13:19.310 "write": true, 00:13:19.310 "unmap": true, 00:13:19.310 "flush": true, 00:13:19.310 "reset": true, 00:13:19.310 "nvme_admin": false, 00:13:19.310 "nvme_io": false, 00:13:19.310 "nvme_io_md": false, 00:13:19.310 "write_zeroes": true, 00:13:19.310 "zcopy": false, 00:13:19.310 "get_zone_info": false, 00:13:19.310 "zone_management": false, 00:13:19.310 "zone_append": false, 00:13:19.310 "compare": false, 00:13:19.310 "compare_and_write": false, 00:13:19.310 "abort": false, 00:13:19.310 "seek_hole": false, 00:13:19.310 "seek_data": false, 00:13:19.310 "copy": false, 00:13:19.310 "nvme_iov_md": false 00:13:19.310 }, 00:13:19.310 "memory_domains": [ 00:13:19.310 { 00:13:19.310 "dma_device_id": "system", 
00:13:19.310 "dma_device_type": 1 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.310 "dma_device_type": 2 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "dma_device_id": "system", 00:13:19.310 "dma_device_type": 1 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.310 "dma_device_type": 2 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "dma_device_id": "system", 00:13:19.310 "dma_device_type": 1 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.310 "dma_device_type": 2 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "dma_device_id": "system", 00:13:19.310 "dma_device_type": 1 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.310 "dma_device_type": 2 00:13:19.310 } 00:13:19.310 ], 00:13:19.310 "driver_specific": { 00:13:19.310 "raid": { 00:13:19.310 "uuid": "07e81ebe-fc2e-4fbb-bb2a-ea77c7786606", 00:13:19.310 "strip_size_kb": 64, 00:13:19.310 "state": "online", 00:13:19.310 "raid_level": "concat", 00:13:19.310 "superblock": false, 00:13:19.310 "num_base_bdevs": 4, 00:13:19.310 "num_base_bdevs_discovered": 4, 00:13:19.310 "num_base_bdevs_operational": 4, 00:13:19.310 "base_bdevs_list": [ 00:13:19.310 { 00:13:19.310 "name": "NewBaseBdev", 00:13:19.310 "uuid": "de7f5f9c-496d-46e1-8448-f591ca455605", 00:13:19.310 "is_configured": true, 00:13:19.310 "data_offset": 0, 00:13:19.310 "data_size": 65536 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "name": "BaseBdev2", 00:13:19.310 "uuid": "0d6990df-71cf-4cef-8685-1f63ed23b84c", 00:13:19.310 "is_configured": true, 00:13:19.310 "data_offset": 0, 00:13:19.310 "data_size": 65536 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "name": "BaseBdev3", 00:13:19.310 "uuid": "4f44fbc0-808b-457a-a88f-d87cd34d0d3f", 00:13:19.310 "is_configured": true, 00:13:19.310 "data_offset": 0, 00:13:19.310 "data_size": 65536 00:13:19.310 }, 00:13:19.310 { 00:13:19.310 "name": "BaseBdev4", 
00:13:19.310 "uuid": "7b502b89-dcc2-4749-8bb9-960d485859d2", 00:13:19.310 "is_configured": true, 00:13:19.310 "data_offset": 0, 00:13:19.310 "data_size": 65536 00:13:19.310 } 00:13:19.310 ] 00:13:19.310 } 00:13:19.310 } 00:13:19.310 }' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:19.311 BaseBdev2 00:13:19.311 BaseBdev3 00:13:19.311 BaseBdev4' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.311 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.570 [2024-11-15 10:41:49.958217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.570 [2024-11-15 10:41:49.958258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.570 [2024-11-15 10:41:49.958374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.570 [2024-11-15 10:41:49.958466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.570 [2024-11-15 10:41:49.958484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71570 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71570 
']' 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71570 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71570 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:19.570 killing process with pid 71570 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71570' 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71570 00:13:19.570 [2024-11-15 10:41:49.996743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.570 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71570 00:13:19.829 [2024-11-15 10:41:50.328093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.763 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:20.763 00:13:20.763 real 0m12.679s 00:13:20.763 user 0m21.445s 00:13:20.763 sys 0m1.541s 00:13:20.763 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:20.763 ************************************ 00:13:20.763 END TEST raid_state_function_test 00:13:20.763 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.763 ************************************ 00:13:21.021 10:41:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:13:21.021 
10:41:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:21.021 10:41:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:21.021 10:41:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.021 ************************************ 00:13:21.021 START TEST raid_state_function_test_sb 00:13:21.021 ************************************ 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72253 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:21.021 Process raid pid: 72253 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72253' 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72253 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72253 ']' 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:21.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:21.021 10:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.021 [2024-11-15 10:41:51.482542] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:13:21.021 [2024-11-15 10:41:51.482683] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.280 [2024-11-15 10:41:51.662180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.280 [2024-11-15 10:41:51.787905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.538 [2024-11-15 10:41:51.999231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.538 [2024-11-15 10:41:51.999283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.104 [2024-11-15 10:41:52.522259] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.104 [2024-11-15 10:41:52.522326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.104 [2024-11-15 10:41:52.522344] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.104 [2024-11-15 10:41:52.522382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.104 [2024-11-15 10:41:52.522393] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:22.104 [2024-11-15 10:41:52.522407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.104 [2024-11-15 10:41:52.522416] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.104 [2024-11-15 10:41:52.522430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.104 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.105 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.105 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.105 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.105 
10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.105 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.105 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.105 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.105 "name": "Existed_Raid", 00:13:22.105 "uuid": "ad5f07af-ae1c-4609-9e24-c30ded861240", 00:13:22.105 "strip_size_kb": 64, 00:13:22.105 "state": "configuring", 00:13:22.105 "raid_level": "concat", 00:13:22.105 "superblock": true, 00:13:22.105 "num_base_bdevs": 4, 00:13:22.105 "num_base_bdevs_discovered": 0, 00:13:22.105 "num_base_bdevs_operational": 4, 00:13:22.105 "base_bdevs_list": [ 00:13:22.105 { 00:13:22.105 "name": "BaseBdev1", 00:13:22.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.105 "is_configured": false, 00:13:22.105 "data_offset": 0, 00:13:22.105 "data_size": 0 00:13:22.105 }, 00:13:22.105 { 00:13:22.105 "name": "BaseBdev2", 00:13:22.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.105 "is_configured": false, 00:13:22.105 "data_offset": 0, 00:13:22.105 "data_size": 0 00:13:22.105 }, 00:13:22.105 { 00:13:22.105 "name": "BaseBdev3", 00:13:22.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.105 "is_configured": false, 00:13:22.105 "data_offset": 0, 00:13:22.105 "data_size": 0 00:13:22.105 }, 00:13:22.105 { 00:13:22.105 "name": "BaseBdev4", 00:13:22.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.105 "is_configured": false, 00:13:22.105 "data_offset": 0, 00:13:22.105 "data_size": 0 00:13:22.105 } 00:13:22.105 ] 00:13:22.105 }' 00:13:22.105 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.105 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.672 10:41:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.672 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.672 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.672 [2024-11-15 10:41:53.002322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.672 [2024-11-15 10:41:53.002384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.672 [2024-11-15 10:41:53.010334] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.672 [2024-11-15 10:41:53.010526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.672 [2024-11-15 10:41:53.010553] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.672 [2024-11-15 10:41:53.010571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.672 [2024-11-15 10:41:53.010581] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.672 [2024-11-15 10:41:53.010594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.672 [2024-11-15 10:41:53.010604] bdev.c:8653:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:22.672 [2024-11-15 10:41:53.010617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.672 [2024-11-15 10:41:53.050694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.672 BaseBdev1 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.672 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.672 [ 00:13:22.672 { 00:13:22.672 "name": "BaseBdev1", 00:13:22.672 "aliases": [ 00:13:22.672 "eace0e5b-43b9-4432-ad23-fd7c0120dd1f" 00:13:22.672 ], 00:13:22.672 "product_name": "Malloc disk", 00:13:22.672 "block_size": 512, 00:13:22.672 "num_blocks": 65536, 00:13:22.673 "uuid": "eace0e5b-43b9-4432-ad23-fd7c0120dd1f", 00:13:22.673 "assigned_rate_limits": { 00:13:22.673 "rw_ios_per_sec": 0, 00:13:22.673 "rw_mbytes_per_sec": 0, 00:13:22.673 "r_mbytes_per_sec": 0, 00:13:22.673 "w_mbytes_per_sec": 0 00:13:22.673 }, 00:13:22.673 "claimed": true, 00:13:22.673 "claim_type": "exclusive_write", 00:13:22.673 "zoned": false, 00:13:22.673 "supported_io_types": { 00:13:22.673 "read": true, 00:13:22.673 "write": true, 00:13:22.673 "unmap": true, 00:13:22.673 "flush": true, 00:13:22.673 "reset": true, 00:13:22.673 "nvme_admin": false, 00:13:22.673 "nvme_io": false, 00:13:22.673 "nvme_io_md": false, 00:13:22.673 "write_zeroes": true, 00:13:22.673 "zcopy": true, 00:13:22.673 "get_zone_info": false, 00:13:22.673 "zone_management": false, 00:13:22.673 "zone_append": false, 00:13:22.673 "compare": false, 00:13:22.673 "compare_and_write": false, 00:13:22.673 "abort": true, 00:13:22.673 "seek_hole": false, 00:13:22.673 "seek_data": false, 00:13:22.673 "copy": true, 00:13:22.673 "nvme_iov_md": false 00:13:22.673 }, 00:13:22.673 "memory_domains": [ 00:13:22.673 { 00:13:22.673 "dma_device_id": "system", 00:13:22.673 "dma_device_type": 1 00:13:22.673 }, 00:13:22.673 { 00:13:22.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.673 "dma_device_type": 2 00:13:22.673 } 
00:13:22.673 ], 00:13:22.673 "driver_specific": {} 00:13:22.673 } 00:13:22.673 ] 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.673 10:41:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.673 "name": "Existed_Raid", 00:13:22.673 "uuid": "565eb3d3-89b6-4603-b636-e96724be7799", 00:13:22.673 "strip_size_kb": 64, 00:13:22.673 "state": "configuring", 00:13:22.673 "raid_level": "concat", 00:13:22.673 "superblock": true, 00:13:22.673 "num_base_bdevs": 4, 00:13:22.673 "num_base_bdevs_discovered": 1, 00:13:22.673 "num_base_bdevs_operational": 4, 00:13:22.673 "base_bdevs_list": [ 00:13:22.673 { 00:13:22.673 "name": "BaseBdev1", 00:13:22.673 "uuid": "eace0e5b-43b9-4432-ad23-fd7c0120dd1f", 00:13:22.673 "is_configured": true, 00:13:22.673 "data_offset": 2048, 00:13:22.673 "data_size": 63488 00:13:22.673 }, 00:13:22.673 { 00:13:22.673 "name": "BaseBdev2", 00:13:22.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.673 "is_configured": false, 00:13:22.673 "data_offset": 0, 00:13:22.673 "data_size": 0 00:13:22.673 }, 00:13:22.673 { 00:13:22.673 "name": "BaseBdev3", 00:13:22.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.673 "is_configured": false, 00:13:22.673 "data_offset": 0, 00:13:22.673 "data_size": 0 00:13:22.673 }, 00:13:22.673 { 00:13:22.673 "name": "BaseBdev4", 00:13:22.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.673 "is_configured": false, 00:13:22.673 "data_offset": 0, 00:13:22.673 "data_size": 0 00:13:22.673 } 00:13:22.673 ] 00:13:22.673 }' 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.673 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.240 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:23.240 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.241 10:41:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.241 [2024-11-15 10:41:53.606906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.241 [2024-11-15 10:41:53.606984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.241 [2024-11-15 10:41:53.614954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.241 [2024-11-15 10:41:53.617234] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.241 [2024-11-15 10:41:53.617451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.241 [2024-11-15 10:41:53.617481] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:23.241 [2024-11-15 10:41:53.617501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.241 [2024-11-15 10:41:53.617512] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:23.241 [2024-11-15 10:41:53.617525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:23.241 "name": "Existed_Raid", 00:13:23.241 "uuid": "34dfa142-1c07-424c-9735-799b52813315", 00:13:23.241 "strip_size_kb": 64, 00:13:23.241 "state": "configuring", 00:13:23.241 "raid_level": "concat", 00:13:23.241 "superblock": true, 00:13:23.241 "num_base_bdevs": 4, 00:13:23.241 "num_base_bdevs_discovered": 1, 00:13:23.241 "num_base_bdevs_operational": 4, 00:13:23.241 "base_bdevs_list": [ 00:13:23.241 { 00:13:23.241 "name": "BaseBdev1", 00:13:23.241 "uuid": "eace0e5b-43b9-4432-ad23-fd7c0120dd1f", 00:13:23.241 "is_configured": true, 00:13:23.241 "data_offset": 2048, 00:13:23.241 "data_size": 63488 00:13:23.241 }, 00:13:23.241 { 00:13:23.241 "name": "BaseBdev2", 00:13:23.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.241 "is_configured": false, 00:13:23.241 "data_offset": 0, 00:13:23.241 "data_size": 0 00:13:23.241 }, 00:13:23.241 { 00:13:23.241 "name": "BaseBdev3", 00:13:23.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.241 "is_configured": false, 00:13:23.241 "data_offset": 0, 00:13:23.241 "data_size": 0 00:13:23.241 }, 00:13:23.241 { 00:13:23.241 "name": "BaseBdev4", 00:13:23.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.241 "is_configured": false, 00:13:23.241 "data_offset": 0, 00:13:23.241 "data_size": 0 00:13:23.241 } 00:13:23.241 ] 00:13:23.241 }' 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.241 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.808 [2024-11-15 10:41:54.177396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:23.808 BaseBdev2 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.808 [ 00:13:23.808 { 00:13:23.808 "name": "BaseBdev2", 00:13:23.808 "aliases": [ 00:13:23.808 "ad365b01-b5a8-426b-8a47-00fcf13ca6db" 00:13:23.808 ], 00:13:23.808 "product_name": "Malloc disk", 00:13:23.808 "block_size": 512, 00:13:23.808 "num_blocks": 65536, 00:13:23.808 "uuid": "ad365b01-b5a8-426b-8a47-00fcf13ca6db", 
00:13:23.808 "assigned_rate_limits": { 00:13:23.808 "rw_ios_per_sec": 0, 00:13:23.808 "rw_mbytes_per_sec": 0, 00:13:23.808 "r_mbytes_per_sec": 0, 00:13:23.808 "w_mbytes_per_sec": 0 00:13:23.808 }, 00:13:23.808 "claimed": true, 00:13:23.808 "claim_type": "exclusive_write", 00:13:23.808 "zoned": false, 00:13:23.808 "supported_io_types": { 00:13:23.808 "read": true, 00:13:23.808 "write": true, 00:13:23.808 "unmap": true, 00:13:23.808 "flush": true, 00:13:23.808 "reset": true, 00:13:23.808 "nvme_admin": false, 00:13:23.808 "nvme_io": false, 00:13:23.808 "nvme_io_md": false, 00:13:23.808 "write_zeroes": true, 00:13:23.808 "zcopy": true, 00:13:23.808 "get_zone_info": false, 00:13:23.808 "zone_management": false, 00:13:23.808 "zone_append": false, 00:13:23.808 "compare": false, 00:13:23.808 "compare_and_write": false, 00:13:23.808 "abort": true, 00:13:23.808 "seek_hole": false, 00:13:23.808 "seek_data": false, 00:13:23.808 "copy": true, 00:13:23.808 "nvme_iov_md": false 00:13:23.808 }, 00:13:23.808 "memory_domains": [ 00:13:23.808 { 00:13:23.808 "dma_device_id": "system", 00:13:23.808 "dma_device_type": 1 00:13:23.808 }, 00:13:23.808 { 00:13:23.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.808 "dma_device_type": 2 00:13:23.808 } 00:13:23.808 ], 00:13:23.808 "driver_specific": {} 00:13:23.808 } 00:13:23.808 ] 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.808 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.809 "name": "Existed_Raid", 00:13:23.809 "uuid": "34dfa142-1c07-424c-9735-799b52813315", 00:13:23.809 "strip_size_kb": 64, 00:13:23.809 "state": "configuring", 00:13:23.809 "raid_level": "concat", 00:13:23.809 "superblock": true, 00:13:23.809 "num_base_bdevs": 4, 00:13:23.809 "num_base_bdevs_discovered": 2, 00:13:23.809 
"num_base_bdevs_operational": 4, 00:13:23.809 "base_bdevs_list": [ 00:13:23.809 { 00:13:23.809 "name": "BaseBdev1", 00:13:23.809 "uuid": "eace0e5b-43b9-4432-ad23-fd7c0120dd1f", 00:13:23.809 "is_configured": true, 00:13:23.809 "data_offset": 2048, 00:13:23.809 "data_size": 63488 00:13:23.809 }, 00:13:23.809 { 00:13:23.809 "name": "BaseBdev2", 00:13:23.809 "uuid": "ad365b01-b5a8-426b-8a47-00fcf13ca6db", 00:13:23.809 "is_configured": true, 00:13:23.809 "data_offset": 2048, 00:13:23.809 "data_size": 63488 00:13:23.809 }, 00:13:23.809 { 00:13:23.809 "name": "BaseBdev3", 00:13:23.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.809 "is_configured": false, 00:13:23.809 "data_offset": 0, 00:13:23.809 "data_size": 0 00:13:23.809 }, 00:13:23.809 { 00:13:23.809 "name": "BaseBdev4", 00:13:23.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.809 "is_configured": false, 00:13:23.809 "data_offset": 0, 00:13:23.809 "data_size": 0 00:13:23.809 } 00:13:23.809 ] 00:13:23.809 }' 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.809 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.379 [2024-11-15 10:41:54.755727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.379 BaseBdev3 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.379 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.379 [ 00:13:24.379 { 00:13:24.379 "name": "BaseBdev3", 00:13:24.379 "aliases": [ 00:13:24.379 "b9bdf738-9835-46d7-a491-ca589dea94a1" 00:13:24.379 ], 00:13:24.379 "product_name": "Malloc disk", 00:13:24.379 "block_size": 512, 00:13:24.379 "num_blocks": 65536, 00:13:24.379 "uuid": "b9bdf738-9835-46d7-a491-ca589dea94a1", 00:13:24.379 "assigned_rate_limits": { 00:13:24.379 "rw_ios_per_sec": 0, 00:13:24.379 "rw_mbytes_per_sec": 0, 00:13:24.379 "r_mbytes_per_sec": 0, 00:13:24.379 "w_mbytes_per_sec": 0 00:13:24.379 }, 00:13:24.379 "claimed": true, 00:13:24.379 "claim_type": "exclusive_write", 00:13:24.379 "zoned": false, 00:13:24.379 "supported_io_types": { 
00:13:24.379 "read": true, 00:13:24.379 "write": true, 00:13:24.379 "unmap": true, 00:13:24.379 "flush": true, 00:13:24.379 "reset": true, 00:13:24.379 "nvme_admin": false, 00:13:24.379 "nvme_io": false, 00:13:24.379 "nvme_io_md": false, 00:13:24.379 "write_zeroes": true, 00:13:24.379 "zcopy": true, 00:13:24.379 "get_zone_info": false, 00:13:24.380 "zone_management": false, 00:13:24.380 "zone_append": false, 00:13:24.380 "compare": false, 00:13:24.380 "compare_and_write": false, 00:13:24.380 "abort": true, 00:13:24.380 "seek_hole": false, 00:13:24.380 "seek_data": false, 00:13:24.380 "copy": true, 00:13:24.380 "nvme_iov_md": false 00:13:24.380 }, 00:13:24.380 "memory_domains": [ 00:13:24.380 { 00:13:24.380 "dma_device_id": "system", 00:13:24.380 "dma_device_type": 1 00:13:24.380 }, 00:13:24.380 { 00:13:24.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.380 "dma_device_type": 2 00:13:24.380 } 00:13:24.380 ], 00:13:24.380 "driver_specific": {} 00:13:24.380 } 00:13:24.380 ] 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.380 "name": "Existed_Raid", 00:13:24.380 "uuid": "34dfa142-1c07-424c-9735-799b52813315", 00:13:24.380 "strip_size_kb": 64, 00:13:24.380 "state": "configuring", 00:13:24.380 "raid_level": "concat", 00:13:24.380 "superblock": true, 00:13:24.380 "num_base_bdevs": 4, 00:13:24.380 "num_base_bdevs_discovered": 3, 00:13:24.380 "num_base_bdevs_operational": 4, 00:13:24.380 "base_bdevs_list": [ 00:13:24.380 { 00:13:24.380 "name": "BaseBdev1", 00:13:24.380 "uuid": "eace0e5b-43b9-4432-ad23-fd7c0120dd1f", 00:13:24.380 "is_configured": true, 00:13:24.380 "data_offset": 2048, 00:13:24.380 "data_size": 63488 00:13:24.380 }, 00:13:24.380 { 00:13:24.380 "name": "BaseBdev2", 00:13:24.380 
"uuid": "ad365b01-b5a8-426b-8a47-00fcf13ca6db", 00:13:24.380 "is_configured": true, 00:13:24.380 "data_offset": 2048, 00:13:24.380 "data_size": 63488 00:13:24.380 }, 00:13:24.380 { 00:13:24.380 "name": "BaseBdev3", 00:13:24.380 "uuid": "b9bdf738-9835-46d7-a491-ca589dea94a1", 00:13:24.380 "is_configured": true, 00:13:24.380 "data_offset": 2048, 00:13:24.380 "data_size": 63488 00:13:24.380 }, 00:13:24.380 { 00:13:24.380 "name": "BaseBdev4", 00:13:24.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.380 "is_configured": false, 00:13:24.380 "data_offset": 0, 00:13:24.380 "data_size": 0 00:13:24.380 } 00:13:24.380 ] 00:13:24.380 }' 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.380 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.948 [2024-11-15 10:41:55.338017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:24.948 [2024-11-15 10:41:55.338334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:24.948 [2024-11-15 10:41:55.338381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:24.948 BaseBdev4 00:13:24.948 [2024-11-15 10:41:55.338710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:24.948 [2024-11-15 10:41:55.338900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:24.948 [2024-11-15 10:41:55.338922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:24.948 [2024-11-15 10:41:55.339106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.948 [ 00:13:24.948 { 00:13:24.948 "name": "BaseBdev4", 00:13:24.948 "aliases": [ 00:13:24.948 "998e7b18-9166-46e1-b89b-84c7a513ec9f" 00:13:24.948 ], 00:13:24.948 "product_name": "Malloc disk", 00:13:24.948 "block_size": 512, 00:13:24.948 
"num_blocks": 65536, 00:13:24.948 "uuid": "998e7b18-9166-46e1-b89b-84c7a513ec9f", 00:13:24.948 "assigned_rate_limits": { 00:13:24.948 "rw_ios_per_sec": 0, 00:13:24.948 "rw_mbytes_per_sec": 0, 00:13:24.948 "r_mbytes_per_sec": 0, 00:13:24.948 "w_mbytes_per_sec": 0 00:13:24.948 }, 00:13:24.948 "claimed": true, 00:13:24.948 "claim_type": "exclusive_write", 00:13:24.948 "zoned": false, 00:13:24.948 "supported_io_types": { 00:13:24.948 "read": true, 00:13:24.948 "write": true, 00:13:24.948 "unmap": true, 00:13:24.948 "flush": true, 00:13:24.948 "reset": true, 00:13:24.948 "nvme_admin": false, 00:13:24.948 "nvme_io": false, 00:13:24.948 "nvme_io_md": false, 00:13:24.948 "write_zeroes": true, 00:13:24.948 "zcopy": true, 00:13:24.948 "get_zone_info": false, 00:13:24.948 "zone_management": false, 00:13:24.948 "zone_append": false, 00:13:24.948 "compare": false, 00:13:24.948 "compare_and_write": false, 00:13:24.948 "abort": true, 00:13:24.948 "seek_hole": false, 00:13:24.948 "seek_data": false, 00:13:24.948 "copy": true, 00:13:24.948 "nvme_iov_md": false 00:13:24.948 }, 00:13:24.948 "memory_domains": [ 00:13:24.948 { 00:13:24.948 "dma_device_id": "system", 00:13:24.948 "dma_device_type": 1 00:13:24.948 }, 00:13:24.948 { 00:13:24.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.948 "dma_device_type": 2 00:13:24.948 } 00:13:24.948 ], 00:13:24.948 "driver_specific": {} 00:13:24.948 } 00:13:24.948 ] 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:13:24.948 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.949 "name": "Existed_Raid", 00:13:24.949 "uuid": "34dfa142-1c07-424c-9735-799b52813315", 00:13:24.949 "strip_size_kb": 64, 00:13:24.949 "state": "online", 00:13:24.949 "raid_level": "concat", 00:13:24.949 "superblock": true, 00:13:24.949 "num_base_bdevs": 4, 
00:13:24.949 "num_base_bdevs_discovered": 4, 00:13:24.949 "num_base_bdevs_operational": 4, 00:13:24.949 "base_bdevs_list": [ 00:13:24.949 { 00:13:24.949 "name": "BaseBdev1", 00:13:24.949 "uuid": "eace0e5b-43b9-4432-ad23-fd7c0120dd1f", 00:13:24.949 "is_configured": true, 00:13:24.949 "data_offset": 2048, 00:13:24.949 "data_size": 63488 00:13:24.949 }, 00:13:24.949 { 00:13:24.949 "name": "BaseBdev2", 00:13:24.949 "uuid": "ad365b01-b5a8-426b-8a47-00fcf13ca6db", 00:13:24.949 "is_configured": true, 00:13:24.949 "data_offset": 2048, 00:13:24.949 "data_size": 63488 00:13:24.949 }, 00:13:24.949 { 00:13:24.949 "name": "BaseBdev3", 00:13:24.949 "uuid": "b9bdf738-9835-46d7-a491-ca589dea94a1", 00:13:24.949 "is_configured": true, 00:13:24.949 "data_offset": 2048, 00:13:24.949 "data_size": 63488 00:13:24.949 }, 00:13:24.949 { 00:13:24.949 "name": "BaseBdev4", 00:13:24.949 "uuid": "998e7b18-9166-46e1-b89b-84c7a513ec9f", 00:13:24.949 "is_configured": true, 00:13:24.949 "data_offset": 2048, 00:13:24.949 "data_size": 63488 00:13:24.949 } 00:13:24.949 ] 00:13:24.949 }' 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.949 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:25.517 
10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.517 [2024-11-15 10:41:55.898677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.517 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:25.517 "name": "Existed_Raid", 00:13:25.517 "aliases": [ 00:13:25.517 "34dfa142-1c07-424c-9735-799b52813315" 00:13:25.517 ], 00:13:25.517 "product_name": "Raid Volume", 00:13:25.517 "block_size": 512, 00:13:25.517 "num_blocks": 253952, 00:13:25.517 "uuid": "34dfa142-1c07-424c-9735-799b52813315", 00:13:25.517 "assigned_rate_limits": { 00:13:25.517 "rw_ios_per_sec": 0, 00:13:25.517 "rw_mbytes_per_sec": 0, 00:13:25.517 "r_mbytes_per_sec": 0, 00:13:25.517 "w_mbytes_per_sec": 0 00:13:25.517 }, 00:13:25.517 "claimed": false, 00:13:25.517 "zoned": false, 00:13:25.517 "supported_io_types": { 00:13:25.517 "read": true, 00:13:25.517 "write": true, 00:13:25.517 "unmap": true, 00:13:25.517 "flush": true, 00:13:25.517 "reset": true, 00:13:25.517 "nvme_admin": false, 00:13:25.517 "nvme_io": false, 00:13:25.517 "nvme_io_md": false, 00:13:25.517 "write_zeroes": true, 00:13:25.517 "zcopy": false, 00:13:25.517 "get_zone_info": false, 00:13:25.517 "zone_management": false, 00:13:25.517 "zone_append": false, 00:13:25.517 "compare": false, 00:13:25.517 "compare_and_write": false, 00:13:25.517 "abort": false, 00:13:25.517 "seek_hole": false, 00:13:25.517 "seek_data": false, 00:13:25.517 "copy": false, 00:13:25.517 
"nvme_iov_md": false 00:13:25.517 }, 00:13:25.517 "memory_domains": [ 00:13:25.517 { 00:13:25.517 "dma_device_id": "system", 00:13:25.517 "dma_device_type": 1 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.518 "dma_device_type": 2 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "dma_device_id": "system", 00:13:25.518 "dma_device_type": 1 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.518 "dma_device_type": 2 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "dma_device_id": "system", 00:13:25.518 "dma_device_type": 1 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.518 "dma_device_type": 2 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "dma_device_id": "system", 00:13:25.518 "dma_device_type": 1 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.518 "dma_device_type": 2 00:13:25.518 } 00:13:25.518 ], 00:13:25.518 "driver_specific": { 00:13:25.518 "raid": { 00:13:25.518 "uuid": "34dfa142-1c07-424c-9735-799b52813315", 00:13:25.518 "strip_size_kb": 64, 00:13:25.518 "state": "online", 00:13:25.518 "raid_level": "concat", 00:13:25.518 "superblock": true, 00:13:25.518 "num_base_bdevs": 4, 00:13:25.518 "num_base_bdevs_discovered": 4, 00:13:25.518 "num_base_bdevs_operational": 4, 00:13:25.518 "base_bdevs_list": [ 00:13:25.518 { 00:13:25.518 "name": "BaseBdev1", 00:13:25.518 "uuid": "eace0e5b-43b9-4432-ad23-fd7c0120dd1f", 00:13:25.518 "is_configured": true, 00:13:25.518 "data_offset": 2048, 00:13:25.518 "data_size": 63488 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "name": "BaseBdev2", 00:13:25.518 "uuid": "ad365b01-b5a8-426b-8a47-00fcf13ca6db", 00:13:25.518 "is_configured": true, 00:13:25.518 "data_offset": 2048, 00:13:25.518 "data_size": 63488 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "name": "BaseBdev3", 00:13:25.518 "uuid": "b9bdf738-9835-46d7-a491-ca589dea94a1", 00:13:25.518 "is_configured": true, 
00:13:25.518 "data_offset": 2048, 00:13:25.518 "data_size": 63488 00:13:25.518 }, 00:13:25.518 { 00:13:25.518 "name": "BaseBdev4", 00:13:25.518 "uuid": "998e7b18-9166-46e1-b89b-84c7a513ec9f", 00:13:25.518 "is_configured": true, 00:13:25.518 "data_offset": 2048, 00:13:25.518 "data_size": 63488 00:13:25.518 } 00:13:25.518 ] 00:13:25.518 } 00:13:25.518 } 00:13:25.518 }' 00:13:25.518 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.518 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:25.518 BaseBdev2 00:13:25.518 BaseBdev3 00:13:25.518 BaseBdev4' 00:13:25.518 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.518 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.518 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.518 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.518 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:25.518 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.518 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.777 10:41:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.777 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.777 [2024-11-15 10:41:56.274412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.777 [2024-11-15 10:41:56.274451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.777 [2024-11-15 10:41:56.274516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:26.069 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.069 "name": "Existed_Raid", 00:13:26.069 "uuid": "34dfa142-1c07-424c-9735-799b52813315", 00:13:26.069 "strip_size_kb": 64, 00:13:26.069 "state": "offline", 00:13:26.069 "raid_level": "concat", 00:13:26.069 "superblock": true, 00:13:26.069 "num_base_bdevs": 4, 00:13:26.069 "num_base_bdevs_discovered": 3, 00:13:26.069 "num_base_bdevs_operational": 3, 00:13:26.069 "base_bdevs_list": [ 00:13:26.069 { 00:13:26.069 "name": null, 00:13:26.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.069 "is_configured": false, 00:13:26.069 "data_offset": 0, 00:13:26.069 "data_size": 63488 00:13:26.069 }, 00:13:26.069 { 00:13:26.069 "name": "BaseBdev2", 00:13:26.069 "uuid": "ad365b01-b5a8-426b-8a47-00fcf13ca6db", 00:13:26.070 "is_configured": true, 00:13:26.070 "data_offset": 2048, 00:13:26.070 "data_size": 63488 00:13:26.070 }, 00:13:26.070 { 00:13:26.070 "name": "BaseBdev3", 00:13:26.070 "uuid": "b9bdf738-9835-46d7-a491-ca589dea94a1", 00:13:26.070 "is_configured": true, 00:13:26.070 "data_offset": 2048, 00:13:26.070 "data_size": 63488 00:13:26.070 }, 00:13:26.070 { 00:13:26.070 "name": "BaseBdev4", 00:13:26.070 "uuid": "998e7b18-9166-46e1-b89b-84c7a513ec9f", 00:13:26.070 "is_configured": true, 00:13:26.070 "data_offset": 2048, 00:13:26.070 "data_size": 63488 00:13:26.070 } 00:13:26.070 ] 00:13:26.070 }' 00:13:26.070 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.070 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.653 
10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.653 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.653 [2024-11-15 10:41:56.970520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.653 [2024-11-15 10:41:57.106500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.653 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:26.912 10:41:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.912 [2024-11-15 10:41:57.246299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:26.912 [2024-11-15 10:41:57.246508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.912 BaseBdev2 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.912 [ 00:13:26.912 { 00:13:26.912 "name": "BaseBdev2", 00:13:26.912 "aliases": [ 00:13:26.912 
"f937b01b-17e0-4b2d-adf3-2938c5c05f2d" 00:13:26.912 ], 00:13:26.912 "product_name": "Malloc disk", 00:13:26.912 "block_size": 512, 00:13:26.912 "num_blocks": 65536, 00:13:26.912 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:26.912 "assigned_rate_limits": { 00:13:26.912 "rw_ios_per_sec": 0, 00:13:26.912 "rw_mbytes_per_sec": 0, 00:13:26.912 "r_mbytes_per_sec": 0, 00:13:26.912 "w_mbytes_per_sec": 0 00:13:26.912 }, 00:13:26.912 "claimed": false, 00:13:26.912 "zoned": false, 00:13:26.912 "supported_io_types": { 00:13:26.912 "read": true, 00:13:26.912 "write": true, 00:13:26.912 "unmap": true, 00:13:26.912 "flush": true, 00:13:26.912 "reset": true, 00:13:26.912 "nvme_admin": false, 00:13:26.912 "nvme_io": false, 00:13:26.912 "nvme_io_md": false, 00:13:26.912 "write_zeroes": true, 00:13:26.912 "zcopy": true, 00:13:26.912 "get_zone_info": false, 00:13:26.912 "zone_management": false, 00:13:26.912 "zone_append": false, 00:13:26.912 "compare": false, 00:13:26.912 "compare_and_write": false, 00:13:26.912 "abort": true, 00:13:26.912 "seek_hole": false, 00:13:26.912 "seek_data": false, 00:13:26.912 "copy": true, 00:13:26.912 "nvme_iov_md": false 00:13:26.912 }, 00:13:26.912 "memory_domains": [ 00:13:26.912 { 00:13:26.912 "dma_device_id": "system", 00:13:26.912 "dma_device_type": 1 00:13:26.912 }, 00:13:26.912 { 00:13:26.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.912 "dma_device_type": 2 00:13:26.912 } 00:13:26.912 ], 00:13:26.912 "driver_specific": {} 00:13:26.912 } 00:13:26.912 ] 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.912 10:41:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.912 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.171 BaseBdev3 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.171 [ 00:13:27.171 { 
00:13:27.171 "name": "BaseBdev3", 00:13:27.171 "aliases": [ 00:13:27.171 "6c182dfb-d19d-486d-9a8e-df081e7e41cd" 00:13:27.171 ], 00:13:27.171 "product_name": "Malloc disk", 00:13:27.171 "block_size": 512, 00:13:27.171 "num_blocks": 65536, 00:13:27.171 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:27.171 "assigned_rate_limits": { 00:13:27.171 "rw_ios_per_sec": 0, 00:13:27.171 "rw_mbytes_per_sec": 0, 00:13:27.171 "r_mbytes_per_sec": 0, 00:13:27.171 "w_mbytes_per_sec": 0 00:13:27.171 }, 00:13:27.171 "claimed": false, 00:13:27.171 "zoned": false, 00:13:27.171 "supported_io_types": { 00:13:27.171 "read": true, 00:13:27.171 "write": true, 00:13:27.171 "unmap": true, 00:13:27.171 "flush": true, 00:13:27.171 "reset": true, 00:13:27.171 "nvme_admin": false, 00:13:27.171 "nvme_io": false, 00:13:27.171 "nvme_io_md": false, 00:13:27.171 "write_zeroes": true, 00:13:27.171 "zcopy": true, 00:13:27.171 "get_zone_info": false, 00:13:27.171 "zone_management": false, 00:13:27.171 "zone_append": false, 00:13:27.171 "compare": false, 00:13:27.171 "compare_and_write": false, 00:13:27.171 "abort": true, 00:13:27.171 "seek_hole": false, 00:13:27.171 "seek_data": false, 00:13:27.171 "copy": true, 00:13:27.171 "nvme_iov_md": false 00:13:27.171 }, 00:13:27.171 "memory_domains": [ 00:13:27.171 { 00:13:27.171 "dma_device_id": "system", 00:13:27.171 "dma_device_type": 1 00:13:27.171 }, 00:13:27.171 { 00:13:27.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.171 "dma_device_type": 2 00:13:27.171 } 00:13:27.171 ], 00:13:27.171 "driver_specific": {} 00:13:27.171 } 00:13:27.171 ] 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.171 BaseBdev4 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:27.171 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:27.172 [ 00:13:27.172 { 00:13:27.172 "name": "BaseBdev4", 00:13:27.172 "aliases": [ 00:13:27.172 "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944" 00:13:27.172 ], 00:13:27.172 "product_name": "Malloc disk", 00:13:27.172 "block_size": 512, 00:13:27.172 "num_blocks": 65536, 00:13:27.172 "uuid": "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:27.172 "assigned_rate_limits": { 00:13:27.172 "rw_ios_per_sec": 0, 00:13:27.172 "rw_mbytes_per_sec": 0, 00:13:27.172 "r_mbytes_per_sec": 0, 00:13:27.172 "w_mbytes_per_sec": 0 00:13:27.172 }, 00:13:27.172 "claimed": false, 00:13:27.172 "zoned": false, 00:13:27.172 "supported_io_types": { 00:13:27.172 "read": true, 00:13:27.172 "write": true, 00:13:27.172 "unmap": true, 00:13:27.172 "flush": true, 00:13:27.172 "reset": true, 00:13:27.172 "nvme_admin": false, 00:13:27.172 "nvme_io": false, 00:13:27.172 "nvme_io_md": false, 00:13:27.172 "write_zeroes": true, 00:13:27.172 "zcopy": true, 00:13:27.172 "get_zone_info": false, 00:13:27.172 "zone_management": false, 00:13:27.172 "zone_append": false, 00:13:27.172 "compare": false, 00:13:27.172 "compare_and_write": false, 00:13:27.172 "abort": true, 00:13:27.172 "seek_hole": false, 00:13:27.172 "seek_data": false, 00:13:27.172 "copy": true, 00:13:27.172 "nvme_iov_md": false 00:13:27.172 }, 00:13:27.172 "memory_domains": [ 00:13:27.172 { 00:13:27.172 "dma_device_id": "system", 00:13:27.172 "dma_device_type": 1 00:13:27.172 }, 00:13:27.172 { 00:13:27.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.172 "dma_device_type": 2 00:13:27.172 } 00:13:27.172 ], 00:13:27.172 "driver_specific": {} 00:13:27.172 } 00:13:27.172 ] 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:27.172 10:41:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.172 [2024-11-15 10:41:57.602306] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:27.172 [2024-11-15 10:41:57.602506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:27.172 [2024-11-15 10:41:57.602555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.172 [2024-11-15 10:41:57.604852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.172 [2024-11-15 10:41:57.604925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.172 "name": "Existed_Raid", 00:13:27.172 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:27.172 "strip_size_kb": 64, 00:13:27.172 "state": "configuring", 00:13:27.172 "raid_level": "concat", 00:13:27.172 "superblock": true, 00:13:27.172 "num_base_bdevs": 4, 00:13:27.172 "num_base_bdevs_discovered": 3, 00:13:27.172 "num_base_bdevs_operational": 4, 00:13:27.172 "base_bdevs_list": [ 00:13:27.172 { 00:13:27.172 "name": "BaseBdev1", 00:13:27.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.172 "is_configured": false, 00:13:27.172 "data_offset": 0, 00:13:27.172 "data_size": 0 00:13:27.172 }, 00:13:27.172 { 00:13:27.172 "name": "BaseBdev2", 00:13:27.172 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:27.172 "is_configured": true, 00:13:27.172 "data_offset": 2048, 00:13:27.172 "data_size": 63488 
00:13:27.172 }, 00:13:27.172 { 00:13:27.172 "name": "BaseBdev3", 00:13:27.172 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:27.172 "is_configured": true, 00:13:27.172 "data_offset": 2048, 00:13:27.172 "data_size": 63488 00:13:27.172 }, 00:13:27.172 { 00:13:27.172 "name": "BaseBdev4", 00:13:27.172 "uuid": "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:27.172 "is_configured": true, 00:13:27.172 "data_offset": 2048, 00:13:27.172 "data_size": 63488 00:13:27.172 } 00:13:27.172 ] 00:13:27.172 }' 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.172 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.740 [2024-11-15 10:41:58.126431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.740 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.741 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.741 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.741 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.741 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.741 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.741 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.741 "name": "Existed_Raid", 00:13:27.741 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:27.741 "strip_size_kb": 64, 00:13:27.741 "state": "configuring", 00:13:27.741 "raid_level": "concat", 00:13:27.741 "superblock": true, 00:13:27.741 "num_base_bdevs": 4, 00:13:27.741 "num_base_bdevs_discovered": 2, 00:13:27.741 "num_base_bdevs_operational": 4, 00:13:27.741 "base_bdevs_list": [ 00:13:27.741 { 00:13:27.741 "name": "BaseBdev1", 00:13:27.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.741 "is_configured": false, 00:13:27.741 "data_offset": 0, 00:13:27.741 "data_size": 0 00:13:27.741 }, 00:13:27.741 { 00:13:27.741 "name": null, 00:13:27.741 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:27.741 "is_configured": false, 00:13:27.741 "data_offset": 0, 00:13:27.741 "data_size": 63488 
00:13:27.741 }, 00:13:27.741 { 00:13:27.741 "name": "BaseBdev3", 00:13:27.741 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:27.741 "is_configured": true, 00:13:27.741 "data_offset": 2048, 00:13:27.741 "data_size": 63488 00:13:27.741 }, 00:13:27.741 { 00:13:27.741 "name": "BaseBdev4", 00:13:27.741 "uuid": "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:27.741 "is_configured": true, 00:13:27.741 "data_offset": 2048, 00:13:27.741 "data_size": 63488 00:13:27.741 } 00:13:27.741 ] 00:13:27.741 }' 00:13:27.741 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.741 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.308 [2024-11-15 10:41:58.716065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.308 BaseBdev1 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.308 [ 00:13:28.308 { 00:13:28.308 "name": "BaseBdev1", 00:13:28.308 "aliases": [ 00:13:28.308 "c15290a5-89a7-44cc-b15f-032c963feb92" 00:13:28.308 ], 00:13:28.308 "product_name": "Malloc disk", 00:13:28.308 "block_size": 512, 00:13:28.308 "num_blocks": 65536, 00:13:28.308 "uuid": "c15290a5-89a7-44cc-b15f-032c963feb92", 00:13:28.308 "assigned_rate_limits": { 00:13:28.308 "rw_ios_per_sec": 0, 00:13:28.308 "rw_mbytes_per_sec": 0, 
00:13:28.308 "r_mbytes_per_sec": 0, 00:13:28.308 "w_mbytes_per_sec": 0 00:13:28.308 }, 00:13:28.308 "claimed": true, 00:13:28.308 "claim_type": "exclusive_write", 00:13:28.308 "zoned": false, 00:13:28.308 "supported_io_types": { 00:13:28.308 "read": true, 00:13:28.308 "write": true, 00:13:28.308 "unmap": true, 00:13:28.308 "flush": true, 00:13:28.308 "reset": true, 00:13:28.308 "nvme_admin": false, 00:13:28.308 "nvme_io": false, 00:13:28.308 "nvme_io_md": false, 00:13:28.308 "write_zeroes": true, 00:13:28.308 "zcopy": true, 00:13:28.308 "get_zone_info": false, 00:13:28.308 "zone_management": false, 00:13:28.308 "zone_append": false, 00:13:28.308 "compare": false, 00:13:28.308 "compare_and_write": false, 00:13:28.308 "abort": true, 00:13:28.308 "seek_hole": false, 00:13:28.308 "seek_data": false, 00:13:28.308 "copy": true, 00:13:28.308 "nvme_iov_md": false 00:13:28.308 }, 00:13:28.308 "memory_domains": [ 00:13:28.308 { 00:13:28.308 "dma_device_id": "system", 00:13:28.308 "dma_device_type": 1 00:13:28.308 }, 00:13:28.308 { 00:13:28.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.308 "dma_device_type": 2 00:13:28.308 } 00:13:28.308 ], 00:13:28.308 "driver_specific": {} 00:13:28.308 } 00:13:28.308 ] 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.308 10:41:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.308 "name": "Existed_Raid", 00:13:28.308 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:28.308 "strip_size_kb": 64, 00:13:28.308 "state": "configuring", 00:13:28.308 "raid_level": "concat", 00:13:28.308 "superblock": true, 00:13:28.308 "num_base_bdevs": 4, 00:13:28.308 "num_base_bdevs_discovered": 3, 00:13:28.308 "num_base_bdevs_operational": 4, 00:13:28.308 "base_bdevs_list": [ 00:13:28.308 { 00:13:28.308 "name": "BaseBdev1", 00:13:28.308 "uuid": "c15290a5-89a7-44cc-b15f-032c963feb92", 00:13:28.308 "is_configured": true, 00:13:28.308 "data_offset": 2048, 00:13:28.308 "data_size": 63488 00:13:28.308 }, 00:13:28.308 { 
00:13:28.308 "name": null, 00:13:28.308 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:28.308 "is_configured": false, 00:13:28.308 "data_offset": 0, 00:13:28.308 "data_size": 63488 00:13:28.308 }, 00:13:28.308 { 00:13:28.308 "name": "BaseBdev3", 00:13:28.308 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:28.308 "is_configured": true, 00:13:28.308 "data_offset": 2048, 00:13:28.308 "data_size": 63488 00:13:28.308 }, 00:13:28.308 { 00:13:28.308 "name": "BaseBdev4", 00:13:28.308 "uuid": "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:28.308 "is_configured": true, 00:13:28.308 "data_offset": 2048, 00:13:28.308 "data_size": 63488 00:13:28.308 } 00:13:28.308 ] 00:13:28.308 }' 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.308 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.876 [2024-11-15 10:41:59.320340] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.876 10:41:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.876 "name": "Existed_Raid", 00:13:28.876 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:28.876 "strip_size_kb": 64, 00:13:28.876 "state": "configuring", 00:13:28.876 "raid_level": "concat", 00:13:28.876 "superblock": true, 00:13:28.876 "num_base_bdevs": 4, 00:13:28.876 "num_base_bdevs_discovered": 2, 00:13:28.876 "num_base_bdevs_operational": 4, 00:13:28.876 "base_bdevs_list": [ 00:13:28.876 { 00:13:28.876 "name": "BaseBdev1", 00:13:28.876 "uuid": "c15290a5-89a7-44cc-b15f-032c963feb92", 00:13:28.876 "is_configured": true, 00:13:28.876 "data_offset": 2048, 00:13:28.876 "data_size": 63488 00:13:28.876 }, 00:13:28.876 { 00:13:28.876 "name": null, 00:13:28.876 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:28.876 "is_configured": false, 00:13:28.876 "data_offset": 0, 00:13:28.876 "data_size": 63488 00:13:28.876 }, 00:13:28.876 { 00:13:28.876 "name": null, 00:13:28.876 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:28.876 "is_configured": false, 00:13:28.876 "data_offset": 0, 00:13:28.876 "data_size": 63488 00:13:28.876 }, 00:13:28.876 { 00:13:28.876 "name": "BaseBdev4", 00:13:28.876 "uuid": "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:28.876 "is_configured": true, 00:13:28.876 "data_offset": 2048, 00:13:28.876 "data_size": 63488 00:13:28.876 } 00:13:28.876 ] 00:13:28.876 }' 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.876 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.444 
10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.444 [2024-11-15 10:41:59.900522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.444 "name": "Existed_Raid", 00:13:29.444 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:29.444 "strip_size_kb": 64, 00:13:29.444 "state": "configuring", 00:13:29.444 "raid_level": "concat", 00:13:29.444 "superblock": true, 00:13:29.444 "num_base_bdevs": 4, 00:13:29.444 "num_base_bdevs_discovered": 3, 00:13:29.444 "num_base_bdevs_operational": 4, 00:13:29.444 "base_bdevs_list": [ 00:13:29.444 { 00:13:29.444 "name": "BaseBdev1", 00:13:29.444 "uuid": "c15290a5-89a7-44cc-b15f-032c963feb92", 00:13:29.444 "is_configured": true, 00:13:29.444 "data_offset": 2048, 00:13:29.444 "data_size": 63488 00:13:29.444 }, 00:13:29.444 { 00:13:29.444 "name": null, 00:13:29.444 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:29.444 "is_configured": false, 00:13:29.444 "data_offset": 0, 00:13:29.444 "data_size": 63488 00:13:29.444 }, 00:13:29.444 { 00:13:29.444 "name": "BaseBdev3", 00:13:29.444 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:29.444 "is_configured": true, 00:13:29.444 "data_offset": 2048, 00:13:29.444 "data_size": 63488 00:13:29.444 }, 00:13:29.444 { 00:13:29.444 "name": "BaseBdev4", 00:13:29.444 "uuid": 
"5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:29.444 "is_configured": true, 00:13:29.444 "data_offset": 2048, 00:13:29.444 "data_size": 63488 00:13:29.444 } 00:13:29.444 ] 00:13:29.444 }' 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.444 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.011 [2024-11-15 10:42:00.480777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.011 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.269 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.269 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.269 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.269 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.269 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.269 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.269 "name": "Existed_Raid", 00:13:30.269 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:30.269 "strip_size_kb": 64, 00:13:30.270 "state": "configuring", 00:13:30.270 "raid_level": "concat", 00:13:30.270 "superblock": true, 00:13:30.270 "num_base_bdevs": 4, 00:13:30.270 "num_base_bdevs_discovered": 2, 00:13:30.270 "num_base_bdevs_operational": 4, 00:13:30.270 "base_bdevs_list": [ 00:13:30.270 { 00:13:30.270 "name": null, 00:13:30.270 
"uuid": "c15290a5-89a7-44cc-b15f-032c963feb92", 00:13:30.270 "is_configured": false, 00:13:30.270 "data_offset": 0, 00:13:30.270 "data_size": 63488 00:13:30.270 }, 00:13:30.270 { 00:13:30.270 "name": null, 00:13:30.270 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:30.270 "is_configured": false, 00:13:30.270 "data_offset": 0, 00:13:30.270 "data_size": 63488 00:13:30.270 }, 00:13:30.270 { 00:13:30.270 "name": "BaseBdev3", 00:13:30.270 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:30.270 "is_configured": true, 00:13:30.270 "data_offset": 2048, 00:13:30.270 "data_size": 63488 00:13:30.270 }, 00:13:30.270 { 00:13:30.270 "name": "BaseBdev4", 00:13:30.270 "uuid": "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:30.270 "is_configured": true, 00:13:30.270 "data_offset": 2048, 00:13:30.270 "data_size": 63488 00:13:30.270 } 00:13:30.270 ] 00:13:30.270 }' 00:13:30.270 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.270 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.833 [2024-11-15 10:42:01.179844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.833 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.834 "name": "Existed_Raid", 00:13:30.834 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:30.834 "strip_size_kb": 64, 00:13:30.834 "state": "configuring", 00:13:30.834 "raid_level": "concat", 00:13:30.834 "superblock": true, 00:13:30.834 "num_base_bdevs": 4, 00:13:30.834 "num_base_bdevs_discovered": 3, 00:13:30.834 "num_base_bdevs_operational": 4, 00:13:30.834 "base_bdevs_list": [ 00:13:30.834 { 00:13:30.834 "name": null, 00:13:30.834 "uuid": "c15290a5-89a7-44cc-b15f-032c963feb92", 00:13:30.834 "is_configured": false, 00:13:30.834 "data_offset": 0, 00:13:30.834 "data_size": 63488 00:13:30.834 }, 00:13:30.834 { 00:13:30.834 "name": "BaseBdev2", 00:13:30.834 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:30.834 "is_configured": true, 00:13:30.834 "data_offset": 2048, 00:13:30.834 "data_size": 63488 00:13:30.834 }, 00:13:30.834 { 00:13:30.834 "name": "BaseBdev3", 00:13:30.834 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:30.834 "is_configured": true, 00:13:30.834 "data_offset": 2048, 00:13:30.834 "data_size": 63488 00:13:30.834 }, 00:13:30.834 { 00:13:30.834 "name": "BaseBdev4", 00:13:30.834 "uuid": "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:30.834 "is_configured": true, 00:13:30.834 "data_offset": 2048, 00:13:30.834 "data_size": 63488 00:13:30.834 } 00:13:30.834 ] 00:13:30.834 }' 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.834 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.411 10:42:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c15290a5-89a7-44cc-b15f-032c963feb92 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 [2024-11-15 10:42:01.841036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:31.411 [2024-11-15 10:42:01.841322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:31.411 [2024-11-15 10:42:01.841341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:31.411 [2024-11-15 10:42:01.841704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:31.411 NewBaseBdev 00:13:31.411 [2024-11-15 10:42:01.842064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:31.411 [2024-11-15 10:42:01.842197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, ra 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.411 id_bdev 0x617000008200 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:31.411 [2024-11-15 10:42:01.842571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:31.411 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.411 10:42:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 [ 00:13:31.411 { 00:13:31.411 "name": "NewBaseBdev", 00:13:31.411 "aliases": [ 00:13:31.411 "c15290a5-89a7-44cc-b15f-032c963feb92" 00:13:31.411 ], 00:13:31.411 "product_name": "Malloc disk", 00:13:31.412 "block_size": 512, 00:13:31.412 "num_blocks": 65536, 00:13:31.412 "uuid": "c15290a5-89a7-44cc-b15f-032c963feb92", 00:13:31.412 "assigned_rate_limits": { 00:13:31.412 "rw_ios_per_sec": 0, 00:13:31.412 "rw_mbytes_per_sec": 0, 00:13:31.412 "r_mbytes_per_sec": 0, 00:13:31.412 "w_mbytes_per_sec": 0 00:13:31.412 }, 00:13:31.412 "claimed": true, 00:13:31.412 "claim_type": "exclusive_write", 00:13:31.412 "zoned": false, 00:13:31.412 "supported_io_types": { 00:13:31.412 "read": true, 00:13:31.412 "write": true, 00:13:31.412 "unmap": true, 00:13:31.412 "flush": true, 00:13:31.412 "reset": true, 00:13:31.412 "nvme_admin": false, 00:13:31.412 "nvme_io": false, 00:13:31.412 "nvme_io_md": false, 00:13:31.412 "write_zeroes": true, 00:13:31.412 "zcopy": true, 00:13:31.412 "get_zone_info": false, 00:13:31.412 "zone_management": false, 00:13:31.412 "zone_append": false, 00:13:31.412 "compare": false, 00:13:31.412 "compare_and_write": false, 00:13:31.412 "abort": true, 00:13:31.412 "seek_hole": false, 00:13:31.412 "seek_data": false, 00:13:31.412 "copy": true, 00:13:31.412 "nvme_iov_md": false 00:13:31.412 }, 00:13:31.412 "memory_domains": [ 00:13:31.412 { 00:13:31.412 "dma_device_id": "system", 00:13:31.412 "dma_device_type": 1 00:13:31.412 }, 00:13:31.412 { 00:13:31.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.412 "dma_device_type": 2 00:13:31.412 } 00:13:31.412 ], 00:13:31.412 "driver_specific": {} 00:13:31.412 } 00:13:31.412 ] 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:31.412 10:42:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.412 "name": "Existed_Raid", 00:13:31.412 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:31.412 "strip_size_kb": 64, 00:13:31.412 
"state": "online", 00:13:31.412 "raid_level": "concat", 00:13:31.412 "superblock": true, 00:13:31.412 "num_base_bdevs": 4, 00:13:31.412 "num_base_bdevs_discovered": 4, 00:13:31.412 "num_base_bdevs_operational": 4, 00:13:31.412 "base_bdevs_list": [ 00:13:31.412 { 00:13:31.412 "name": "NewBaseBdev", 00:13:31.412 "uuid": "c15290a5-89a7-44cc-b15f-032c963feb92", 00:13:31.412 "is_configured": true, 00:13:31.412 "data_offset": 2048, 00:13:31.412 "data_size": 63488 00:13:31.412 }, 00:13:31.412 { 00:13:31.412 "name": "BaseBdev2", 00:13:31.412 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:31.412 "is_configured": true, 00:13:31.412 "data_offset": 2048, 00:13:31.412 "data_size": 63488 00:13:31.412 }, 00:13:31.412 { 00:13:31.412 "name": "BaseBdev3", 00:13:31.412 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:31.412 "is_configured": true, 00:13:31.412 "data_offset": 2048, 00:13:31.412 "data_size": 63488 00:13:31.412 }, 00:13:31.412 { 00:13:31.412 "name": "BaseBdev4", 00:13:31.412 "uuid": "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:31.412 "is_configured": true, 00:13:31.412 "data_offset": 2048, 00:13:31.412 "data_size": 63488 00:13:31.412 } 00:13:31.412 ] 00:13:31.412 }' 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.412 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.979 
10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.979 [2024-11-15 10:42:02.397723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.979 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.979 "name": "Existed_Raid", 00:13:31.979 "aliases": [ 00:13:31.979 "a254c6cc-b020-434f-bed7-c8be92f91cd9" 00:13:31.979 ], 00:13:31.979 "product_name": "Raid Volume", 00:13:31.979 "block_size": 512, 00:13:31.979 "num_blocks": 253952, 00:13:31.979 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:31.979 "assigned_rate_limits": { 00:13:31.979 "rw_ios_per_sec": 0, 00:13:31.979 "rw_mbytes_per_sec": 0, 00:13:31.979 "r_mbytes_per_sec": 0, 00:13:31.979 "w_mbytes_per_sec": 0 00:13:31.979 }, 00:13:31.979 "claimed": false, 00:13:31.979 "zoned": false, 00:13:31.979 "supported_io_types": { 00:13:31.979 "read": true, 00:13:31.979 "write": true, 00:13:31.979 "unmap": true, 00:13:31.979 "flush": true, 00:13:31.979 "reset": true, 00:13:31.979 "nvme_admin": false, 00:13:31.979 "nvme_io": false, 00:13:31.979 "nvme_io_md": false, 00:13:31.979 "write_zeroes": true, 00:13:31.979 "zcopy": false, 00:13:31.979 "get_zone_info": false, 00:13:31.979 "zone_management": false, 00:13:31.979 "zone_append": false, 00:13:31.979 "compare": false, 00:13:31.979 "compare_and_write": false, 00:13:31.979 "abort": 
false, 00:13:31.979 "seek_hole": false, 00:13:31.979 "seek_data": false, 00:13:31.979 "copy": false, 00:13:31.979 "nvme_iov_md": false 00:13:31.979 }, 00:13:31.979 "memory_domains": [ 00:13:31.979 { 00:13:31.979 "dma_device_id": "system", 00:13:31.979 "dma_device_type": 1 00:13:31.979 }, 00:13:31.979 { 00:13:31.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.979 "dma_device_type": 2 00:13:31.979 }, 00:13:31.979 { 00:13:31.979 "dma_device_id": "system", 00:13:31.979 "dma_device_type": 1 00:13:31.979 }, 00:13:31.979 { 00:13:31.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.979 "dma_device_type": 2 00:13:31.979 }, 00:13:31.979 { 00:13:31.979 "dma_device_id": "system", 00:13:31.979 "dma_device_type": 1 00:13:31.979 }, 00:13:31.979 { 00:13:31.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.979 "dma_device_type": 2 00:13:31.979 }, 00:13:31.979 { 00:13:31.979 "dma_device_id": "system", 00:13:31.979 "dma_device_type": 1 00:13:31.979 }, 00:13:31.979 { 00:13:31.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.979 "dma_device_type": 2 00:13:31.979 } 00:13:31.979 ], 00:13:31.979 "driver_specific": { 00:13:31.979 "raid": { 00:13:31.979 "uuid": "a254c6cc-b020-434f-bed7-c8be92f91cd9", 00:13:31.979 "strip_size_kb": 64, 00:13:31.979 "state": "online", 00:13:31.979 "raid_level": "concat", 00:13:31.979 "superblock": true, 00:13:31.979 "num_base_bdevs": 4, 00:13:31.979 "num_base_bdevs_discovered": 4, 00:13:31.979 "num_base_bdevs_operational": 4, 00:13:31.979 "base_bdevs_list": [ 00:13:31.979 { 00:13:31.979 "name": "NewBaseBdev", 00:13:31.979 "uuid": "c15290a5-89a7-44cc-b15f-032c963feb92", 00:13:31.979 "is_configured": true, 00:13:31.979 "data_offset": 2048, 00:13:31.979 "data_size": 63488 00:13:31.979 }, 00:13:31.979 { 00:13:31.979 "name": "BaseBdev2", 00:13:31.979 "uuid": "f937b01b-17e0-4b2d-adf3-2938c5c05f2d", 00:13:31.979 "is_configured": true, 00:13:31.979 "data_offset": 2048, 00:13:31.979 "data_size": 63488 00:13:31.979 }, 00:13:31.979 { 00:13:31.979 
"name": "BaseBdev3", 00:13:31.980 "uuid": "6c182dfb-d19d-486d-9a8e-df081e7e41cd", 00:13:31.980 "is_configured": true, 00:13:31.980 "data_offset": 2048, 00:13:31.980 "data_size": 63488 00:13:31.980 }, 00:13:31.980 { 00:13:31.980 "name": "BaseBdev4", 00:13:31.980 "uuid": "5a4fdf1c-7035-4e8e-8d5b-7aa56bda4944", 00:13:31.980 "is_configured": true, 00:13:31.980 "data_offset": 2048, 00:13:31.980 "data_size": 63488 00:13:31.980 } 00:13:31.980 ] 00:13:31.980 } 00:13:31.980 } 00:13:31.980 }' 00:13:31.980 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.980 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:31.980 BaseBdev2 00:13:31.980 BaseBdev3 00:13:31.980 BaseBdev4' 00:13:31.980 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.239 10:42:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.239 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.498 [2024-11-15 10:42:02.797376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:32.498 [2024-11-15 10:42:02.797421] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.498 [2024-11-15 10:42:02.797515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.498 [2024-11-15 10:42:02.797603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.499 [2024-11-15 10:42:02.797620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72253 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72253 ']' 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72253 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72253 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:32.499 killing process with pid 72253 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72253' 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72253 00:13:32.499 [2024-11-15 10:42:02.835685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.499 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72253 00:13:32.757 [2024-11-15 10:42:03.170955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.692 10:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:33.692 00:13:33.692 real 0m12.791s 00:13:33.692 user 0m21.456s 00:13:33.692 sys 0m1.675s 00:13:33.692 10:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:33.692 10:42:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.692 ************************************ 00:13:33.692 END TEST raid_state_function_test_sb 00:13:33.692 ************************************ 00:13:33.692 10:42:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:33.692 10:42:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:33.692 10:42:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:33.692 10:42:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.692 ************************************ 00:13:33.692 START TEST raid_superblock_test 00:13:33.692 ************************************ 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:33.692 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72935 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72935 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72935 ']' 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:33.693 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.952 [2024-11-15 10:42:04.317121] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:13:33.952 [2024-11-15 10:42:04.317336] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72935 ] 00:13:33.952 [2024-11-15 10:42:04.503404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.210 [2024-11-15 10:42:04.629167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.468 [2024-11-15 10:42:04.845642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.468 [2024-11-15 10:42:04.845687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:34.727 
10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 malloc1 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 [2024-11-15 10:42:05.279054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:34.727 [2024-11-15 10:42:05.279124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.727 [2024-11-15 10:42:05.279155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:34.727 [2024-11-15 10:42:05.279171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.727 [2024-11-15 10:42:05.281801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.727 [2024-11-15 10:42:05.281848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:34.727 pt1 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.727 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 malloc2 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 [2024-11-15 10:42:05.326336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:34.987 [2024-11-15 10:42:05.326422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.987 [2024-11-15 10:42:05.326457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:34.987 [2024-11-15 10:42:05.326471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.987 [2024-11-15 10:42:05.329128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.987 [2024-11-15 10:42:05.329175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:34.987 
pt2 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 malloc3 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.987 [2024-11-15 10:42:05.388117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:34.987 [2024-11-15 10:42:05.388181] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.987 [2024-11-15 10:42:05.388212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:34.987 [2024-11-15 10:42:05.388227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.987 [2024-11-15 10:42:05.390918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.987 [2024-11-15 10:42:05.390962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:34.987 pt3 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:34.987 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.988 malloc4 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.988 [2024-11-15 10:42:05.436269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:34.988 [2024-11-15 10:42:05.436339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.988 [2024-11-15 10:42:05.436387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:34.988 [2024-11-15 10:42:05.436403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.988 [2024-11-15 10:42:05.438913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.988 [2024-11-15 10:42:05.438958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:34.988 pt4 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.988 [2024-11-15 10:42:05.444248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:34.988 [2024-11-15 
10:42:05.446473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:34.988 [2024-11-15 10:42:05.446594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:34.988 [2024-11-15 10:42:05.446671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:34.988 [2024-11-15 10:42:05.446911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.988 [2024-11-15 10:42:05.446930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:34.988 [2024-11-15 10:42:05.447261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:34.988 [2024-11-15 10:42:05.447492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:34.988 [2024-11-15 10:42:05.447514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:34.988 [2024-11-15 10:42:05.447689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.988 "name": "raid_bdev1", 00:13:34.988 "uuid": "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0", 00:13:34.988 "strip_size_kb": 64, 00:13:34.988 "state": "online", 00:13:34.988 "raid_level": "concat", 00:13:34.988 "superblock": true, 00:13:34.988 "num_base_bdevs": 4, 00:13:34.988 "num_base_bdevs_discovered": 4, 00:13:34.988 "num_base_bdevs_operational": 4, 00:13:34.988 "base_bdevs_list": [ 00:13:34.988 { 00:13:34.988 "name": "pt1", 00:13:34.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.988 "is_configured": true, 00:13:34.988 "data_offset": 2048, 00:13:34.988 "data_size": 63488 00:13:34.988 }, 00:13:34.988 { 00:13:34.988 "name": "pt2", 00:13:34.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.988 "is_configured": true, 00:13:34.988 "data_offset": 2048, 00:13:34.988 "data_size": 63488 00:13:34.988 }, 00:13:34.988 { 00:13:34.988 "name": "pt3", 00:13:34.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.988 "is_configured": true, 00:13:34.988 "data_offset": 2048, 00:13:34.988 
"data_size": 63488 00:13:34.988 }, 00:13:34.988 { 00:13:34.988 "name": "pt4", 00:13:34.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:34.988 "is_configured": true, 00:13:34.988 "data_offset": 2048, 00:13:34.988 "data_size": 63488 00:13:34.988 } 00:13:34.988 ] 00:13:34.988 }' 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.988 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.558 [2024-11-15 10:42:05.964778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.558 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.558 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.558 "name": "raid_bdev1", 00:13:35.558 "aliases": [ 00:13:35.558 "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0" 
00:13:35.558 ], 00:13:35.558 "product_name": "Raid Volume", 00:13:35.558 "block_size": 512, 00:13:35.558 "num_blocks": 253952, 00:13:35.558 "uuid": "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0", 00:13:35.558 "assigned_rate_limits": { 00:13:35.558 "rw_ios_per_sec": 0, 00:13:35.558 "rw_mbytes_per_sec": 0, 00:13:35.558 "r_mbytes_per_sec": 0, 00:13:35.558 "w_mbytes_per_sec": 0 00:13:35.558 }, 00:13:35.558 "claimed": false, 00:13:35.558 "zoned": false, 00:13:35.558 "supported_io_types": { 00:13:35.558 "read": true, 00:13:35.558 "write": true, 00:13:35.558 "unmap": true, 00:13:35.558 "flush": true, 00:13:35.558 "reset": true, 00:13:35.558 "nvme_admin": false, 00:13:35.558 "nvme_io": false, 00:13:35.558 "nvme_io_md": false, 00:13:35.558 "write_zeroes": true, 00:13:35.558 "zcopy": false, 00:13:35.558 "get_zone_info": false, 00:13:35.558 "zone_management": false, 00:13:35.558 "zone_append": false, 00:13:35.558 "compare": false, 00:13:35.558 "compare_and_write": false, 00:13:35.558 "abort": false, 00:13:35.558 "seek_hole": false, 00:13:35.558 "seek_data": false, 00:13:35.558 "copy": false, 00:13:35.558 "nvme_iov_md": false 00:13:35.558 }, 00:13:35.558 "memory_domains": [ 00:13:35.558 { 00:13:35.558 "dma_device_id": "system", 00:13:35.558 "dma_device_type": 1 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.558 "dma_device_type": 2 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "dma_device_id": "system", 00:13:35.558 "dma_device_type": 1 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.558 "dma_device_type": 2 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "dma_device_id": "system", 00:13:35.558 "dma_device_type": 1 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.558 "dma_device_type": 2 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "dma_device_id": "system", 00:13:35.558 "dma_device_type": 1 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:35.558 "dma_device_type": 2 00:13:35.558 } 00:13:35.558 ], 00:13:35.558 "driver_specific": { 00:13:35.558 "raid": { 00:13:35.558 "uuid": "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0", 00:13:35.558 "strip_size_kb": 64, 00:13:35.558 "state": "online", 00:13:35.558 "raid_level": "concat", 00:13:35.558 "superblock": true, 00:13:35.558 "num_base_bdevs": 4, 00:13:35.558 "num_base_bdevs_discovered": 4, 00:13:35.558 "num_base_bdevs_operational": 4, 00:13:35.558 "base_bdevs_list": [ 00:13:35.558 { 00:13:35.558 "name": "pt1", 00:13:35.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.558 "is_configured": true, 00:13:35.558 "data_offset": 2048, 00:13:35.558 "data_size": 63488 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "name": "pt2", 00:13:35.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.558 "is_configured": true, 00:13:35.558 "data_offset": 2048, 00:13:35.558 "data_size": 63488 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "name": "pt3", 00:13:35.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.558 "is_configured": true, 00:13:35.558 "data_offset": 2048, 00:13:35.558 "data_size": 63488 00:13:35.558 }, 00:13:35.558 { 00:13:35.558 "name": "pt4", 00:13:35.558 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:35.558 "is_configured": true, 00:13:35.558 "data_offset": 2048, 00:13:35.558 "data_size": 63488 00:13:35.558 } 00:13:35.558 ] 00:13:35.558 } 00:13:35.558 } 00:13:35.558 }' 00:13:35.558 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.558 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:35.558 pt2 00:13:35.558 pt3 00:13:35.558 pt4' 00:13:35.558 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.819 10:42:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:35.819 [2024-11-15 10:42:06.344884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.819 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d1627da1-c30a-45d7-9ab7-b2c00a7e17f0 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d1627da1-c30a-45d7-9ab7-b2c00a7e17f0 ']' 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.079 [2024-11-15 10:42:06.392501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.079 [2024-11-15 10:42:06.392539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.079 [2024-11-15 10:42:06.392647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.079 [2024-11-15 10:42:06.392738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.079 [2024-11-15 10:42:06.392760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.079 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.079 [2024-11-15 10:42:06.540580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:36.079 [2024-11-15 10:42:06.542900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:36.079 [2024-11-15 10:42:06.542983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:36.079 [2024-11-15 10:42:06.543038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:36.079 [2024-11-15 10:42:06.543110] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:36.079 [2024-11-15 10:42:06.543193] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:36.079 [2024-11-15 10:42:06.543225] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:36.079 [2024-11-15 10:42:06.543255] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:36.079 [2024-11-15 10:42:06.543276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.079 [2024-11-15 10:42:06.543292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:13:36.079 request: 00:13:36.079 { 00:13:36.079 "name": "raid_bdev1", 00:13:36.079 "raid_level": "concat", 00:13:36.079 "base_bdevs": [ 00:13:36.079 "malloc1", 00:13:36.079 "malloc2", 00:13:36.079 "malloc3", 00:13:36.079 "malloc4" 00:13:36.079 ], 00:13:36.079 "strip_size_kb": 64, 00:13:36.079 "superblock": false, 00:13:36.079 "method": "bdev_raid_create", 00:13:36.079 "req_id": 1 00:13:36.079 } 00:13:36.079 Got JSON-RPC error response 00:13:36.079 response: 00:13:36.079 { 00:13:36.080 "code": -17, 00:13:36.080 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:36.080 } 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.080 [2024-11-15 10:42:06.600533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:36.080 [2024-11-15 10:42:06.600604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.080 [2024-11-15 10:42:06.600632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:36.080 [2024-11-15 10:42:06.600650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.080 [2024-11-15 10:42:06.603311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.080 [2024-11-15 10:42:06.603419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:36.080 [2024-11-15 10:42:06.603529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:36.080 [2024-11-15 10:42:06.603609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:36.080 pt1 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.080 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.339 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.339 "name": "raid_bdev1", 00:13:36.339 "uuid": "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0", 00:13:36.339 "strip_size_kb": 64, 00:13:36.339 "state": "configuring", 00:13:36.339 "raid_level": "concat", 00:13:36.339 "superblock": true, 00:13:36.339 "num_base_bdevs": 4, 00:13:36.339 "num_base_bdevs_discovered": 1, 00:13:36.339 "num_base_bdevs_operational": 4, 00:13:36.339 "base_bdevs_list": [ 00:13:36.339 { 00:13:36.339 "name": "pt1", 00:13:36.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.339 "is_configured": true, 00:13:36.339 "data_offset": 2048, 00:13:36.339 "data_size": 63488 00:13:36.339 }, 00:13:36.339 { 00:13:36.339 "name": null, 00:13:36.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.339 "is_configured": false, 00:13:36.339 "data_offset": 2048, 00:13:36.339 "data_size": 63488 00:13:36.339 }, 00:13:36.339 { 00:13:36.339 "name": null, 00:13:36.339 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.339 "is_configured": false, 00:13:36.339 "data_offset": 2048, 00:13:36.339 "data_size": 63488 00:13:36.339 }, 00:13:36.339 { 00:13:36.339 "name": null, 00:13:36.339 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.339 "is_configured": false, 00:13:36.339 "data_offset": 2048, 00:13:36.339 "data_size": 63488 00:13:36.339 } 00:13:36.339 ] 00:13:36.339 }' 00:13:36.339 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.339 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.909 [2024-11-15 10:42:07.176720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.909 [2024-11-15 10:42:07.176964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.909 [2024-11-15 10:42:07.177003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:36.909 [2024-11-15 10:42:07.177022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.909 [2024-11-15 10:42:07.177584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.909 [2024-11-15 10:42:07.177625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.909 [2024-11-15 10:42:07.177725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:36.909 [2024-11-15 10:42:07.177760] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.909 pt2 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.909 [2024-11-15 10:42:07.184705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.909 10:42:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.909 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.909 "name": "raid_bdev1", 00:13:36.909 "uuid": "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0", 00:13:36.909 "strip_size_kb": 64, 00:13:36.909 "state": "configuring", 00:13:36.909 "raid_level": "concat", 00:13:36.909 "superblock": true, 00:13:36.909 "num_base_bdevs": 4, 00:13:36.909 "num_base_bdevs_discovered": 1, 00:13:36.909 "num_base_bdevs_operational": 4, 00:13:36.909 "base_bdevs_list": [ 00:13:36.909 { 00:13:36.909 "name": "pt1", 00:13:36.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.909 "is_configured": true, 00:13:36.909 "data_offset": 2048, 00:13:36.910 "data_size": 63488 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "name": null, 00:13:36.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.910 "is_configured": false, 00:13:36.910 "data_offset": 0, 00:13:36.910 "data_size": 63488 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "name": null, 00:13:36.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.910 "is_configured": false, 00:13:36.910 "data_offset": 2048, 00:13:36.910 "data_size": 63488 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "name": null, 00:13:36.910 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.910 "is_configured": false, 00:13:36.910 "data_offset": 2048, 00:13:36.910 "data_size": 63488 00:13:36.910 } 00:13:36.910 ] 00:13:36.910 }' 00:13:36.910 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.910 10:42:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.482 [2024-11-15 10:42:07.736863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:37.482 [2024-11-15 10:42:07.736940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.482 [2024-11-15 10:42:07.736971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:37.482 [2024-11-15 10:42:07.736985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.482 [2024-11-15 10:42:07.737534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.482 [2024-11-15 10:42:07.737560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:37.482 [2024-11-15 10:42:07.737662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:37.482 [2024-11-15 10:42:07.737694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.482 pt2 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.482 [2024-11-15 10:42:07.744819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:37.482 [2024-11-15 10:42:07.744874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.482 [2024-11-15 10:42:07.744900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:37.482 [2024-11-15 10:42:07.744914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.482 [2024-11-15 10:42:07.745375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.482 [2024-11-15 10:42:07.745404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:37.482 [2024-11-15 10:42:07.745489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:37.482 [2024-11-15 10:42:07.745529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:37.482 pt3 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.482 [2024-11-15 10:42:07.752830] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:37.482 [2024-11-15 10:42:07.752907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.482 [2024-11-15 10:42:07.752950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:37.482 [2024-11-15 10:42:07.752977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.482 [2024-11-15 10:42:07.753615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.482 [2024-11-15 10:42:07.753670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:37.482 [2024-11-15 10:42:07.753762] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:37.482 [2024-11-15 10:42:07.753791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:37.482 [2024-11-15 10:42:07.753967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:37.482 [2024-11-15 10:42:07.753982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:37.482 [2024-11-15 10:42:07.754280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:37.482 [2024-11-15 10:42:07.754510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:37.482 [2024-11-15 10:42:07.754533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:37.482 [2024-11-15 10:42:07.754690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.482 pt4 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.482 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.482 "name": "raid_bdev1", 00:13:37.482 "uuid": "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0", 00:13:37.482 "strip_size_kb": 64, 00:13:37.482 "state": "online", 00:13:37.482 "raid_level": "concat", 00:13:37.482 
"superblock": true, 00:13:37.482 "num_base_bdevs": 4, 00:13:37.482 "num_base_bdevs_discovered": 4, 00:13:37.482 "num_base_bdevs_operational": 4, 00:13:37.482 "base_bdevs_list": [ 00:13:37.482 { 00:13:37.482 "name": "pt1", 00:13:37.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.482 "is_configured": true, 00:13:37.483 "data_offset": 2048, 00:13:37.483 "data_size": 63488 00:13:37.483 }, 00:13:37.483 { 00:13:37.483 "name": "pt2", 00:13:37.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.483 "is_configured": true, 00:13:37.483 "data_offset": 2048, 00:13:37.483 "data_size": 63488 00:13:37.483 }, 00:13:37.483 { 00:13:37.483 "name": "pt3", 00:13:37.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.483 "is_configured": true, 00:13:37.483 "data_offset": 2048, 00:13:37.483 "data_size": 63488 00:13:37.483 }, 00:13:37.483 { 00:13:37.483 "name": "pt4", 00:13:37.483 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.483 "is_configured": true, 00:13:37.483 "data_offset": 2048, 00:13:37.483 "data_size": 63488 00:13:37.483 } 00:13:37.483 ] 00:13:37.483 }' 00:13:37.483 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.483 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.741 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:37.741 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:37.741 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.741 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.741 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.741 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.741 10:42:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.741 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.741 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.741 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.741 [2024-11-15 10:42:08.285406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:38.001 "name": "raid_bdev1", 00:13:38.001 "aliases": [ 00:13:38.001 "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0" 00:13:38.001 ], 00:13:38.001 "product_name": "Raid Volume", 00:13:38.001 "block_size": 512, 00:13:38.001 "num_blocks": 253952, 00:13:38.001 "uuid": "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0", 00:13:38.001 "assigned_rate_limits": { 00:13:38.001 "rw_ios_per_sec": 0, 00:13:38.001 "rw_mbytes_per_sec": 0, 00:13:38.001 "r_mbytes_per_sec": 0, 00:13:38.001 "w_mbytes_per_sec": 0 00:13:38.001 }, 00:13:38.001 "claimed": false, 00:13:38.001 "zoned": false, 00:13:38.001 "supported_io_types": { 00:13:38.001 "read": true, 00:13:38.001 "write": true, 00:13:38.001 "unmap": true, 00:13:38.001 "flush": true, 00:13:38.001 "reset": true, 00:13:38.001 "nvme_admin": false, 00:13:38.001 "nvme_io": false, 00:13:38.001 "nvme_io_md": false, 00:13:38.001 "write_zeroes": true, 00:13:38.001 "zcopy": false, 00:13:38.001 "get_zone_info": false, 00:13:38.001 "zone_management": false, 00:13:38.001 "zone_append": false, 00:13:38.001 "compare": false, 00:13:38.001 "compare_and_write": false, 00:13:38.001 "abort": false, 00:13:38.001 "seek_hole": false, 00:13:38.001 "seek_data": false, 00:13:38.001 "copy": false, 00:13:38.001 "nvme_iov_md": false 00:13:38.001 }, 00:13:38.001 
"memory_domains": [ 00:13:38.001 { 00:13:38.001 "dma_device_id": "system", 00:13:38.001 "dma_device_type": 1 00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.001 "dma_device_type": 2 00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "dma_device_id": "system", 00:13:38.001 "dma_device_type": 1 00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.001 "dma_device_type": 2 00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "dma_device_id": "system", 00:13:38.001 "dma_device_type": 1 00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.001 "dma_device_type": 2 00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "dma_device_id": "system", 00:13:38.001 "dma_device_type": 1 00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.001 "dma_device_type": 2 00:13:38.001 } 00:13:38.001 ], 00:13:38.001 "driver_specific": { 00:13:38.001 "raid": { 00:13:38.001 "uuid": "d1627da1-c30a-45d7-9ab7-b2c00a7e17f0", 00:13:38.001 "strip_size_kb": 64, 00:13:38.001 "state": "online", 00:13:38.001 "raid_level": "concat", 00:13:38.001 "superblock": true, 00:13:38.001 "num_base_bdevs": 4, 00:13:38.001 "num_base_bdevs_discovered": 4, 00:13:38.001 "num_base_bdevs_operational": 4, 00:13:38.001 "base_bdevs_list": [ 00:13:38.001 { 00:13:38.001 "name": "pt1", 00:13:38.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.001 "is_configured": true, 00:13:38.001 "data_offset": 2048, 00:13:38.001 "data_size": 63488 00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "name": "pt2", 00:13:38.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.001 "is_configured": true, 00:13:38.001 "data_offset": 2048, 00:13:38.001 "data_size": 63488 00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "name": "pt3", 00:13:38.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.001 "is_configured": true, 00:13:38.001 "data_offset": 2048, 00:13:38.001 "data_size": 63488 
00:13:38.001 }, 00:13:38.001 { 00:13:38.001 "name": "pt4", 00:13:38.001 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.001 "is_configured": true, 00:13:38.001 "data_offset": 2048, 00:13:38.001 "data_size": 63488 00:13:38.001 } 00:13:38.001 ] 00:13:38.001 } 00:13:38.001 } 00:13:38.001 }' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:38.001 pt2 00:13:38.001 pt3 00:13:38.001 pt4' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.001 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.261 [2024-11-15 10:42:08.669438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d1627da1-c30a-45d7-9ab7-b2c00a7e17f0 '!=' d1627da1-c30a-45d7-9ab7-b2c00a7e17f0 ']' 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72935 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72935 ']' 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72935 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@957 -- # uname 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72935 00:13:38.261 killing process with pid 72935 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72935' 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72935 00:13:38.261 [2024-11-15 10:42:08.745943] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.261 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72935 00:13:38.261 [2024-11-15 10:42:08.746043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.261 [2024-11-15 10:42:08.746143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.261 [2024-11-15 10:42:08.746158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:38.830 [2024-11-15 10:42:09.078867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.767 10:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:39.767 00:13:39.767 real 0m5.845s 00:13:39.767 user 0m8.950s 00:13:39.767 sys 0m0.779s 00:13:39.767 ************************************ 00:13:39.767 END TEST raid_superblock_test 00:13:39.767 ************************************ 00:13:39.767 10:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:39.767 10:42:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.767 10:42:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:39.767 10:42:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:39.767 10:42:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:39.767 10:42:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.767 ************************************ 00:13:39.767 START TEST raid_read_error_test 00:13:39.767 ************************************ 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:39.767 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.A1watTXu1W 00:13:39.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73205 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73205 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73205 ']' 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:39.768 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.768 [2024-11-15 10:42:10.222868] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:13:39.768 [2024-11-15 10:42:10.223061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73205 ] 00:13:40.026 [2024-11-15 10:42:10.409522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.026 [2024-11-15 10:42:10.537123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.284 [2024-11-15 10:42:10.731663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.284 [2024-11-15 10:42:10.731738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.852 BaseBdev1_malloc 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.852 true 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.852 [2024-11-15 10:42:11.324685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:40.852 [2024-11-15 10:42:11.324756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.852 [2024-11-15 10:42:11.324787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:40.852 [2024-11-15 10:42:11.324805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.852 [2024-11-15 10:42:11.327476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.852 [2024-11-15 10:42:11.327672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:40.852 BaseBdev1 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.852 BaseBdev2_malloc 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.852 true 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.852 [2024-11-15 10:42:11.377016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:40.852 [2024-11-15 10:42:11.377221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.852 [2024-11-15 10:42:11.377257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:40.852 [2024-11-15 10:42:11.377276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.852 [2024-11-15 10:42:11.379974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.852 [2024-11-15 10:42:11.380030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:40.852 BaseBdev2 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.852 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 BaseBdev3_malloc 00:13:41.111 10:42:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 true 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 [2024-11-15 10:42:11.440231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:41.111 [2024-11-15 10:42:11.440304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.111 [2024-11-15 10:42:11.440334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:41.111 [2024-11-15 10:42:11.440371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.111 [2024-11-15 10:42:11.443078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.111 [2024-11-15 10:42:11.443265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:41.111 BaseBdev3 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 BaseBdev4_malloc 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 true 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 [2024-11-15 10:42:11.491975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:41.111 [2024-11-15 10:42:11.492041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.111 [2024-11-15 10:42:11.492068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:41.111 [2024-11-15 10:42:11.492085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.111 [2024-11-15 10:42:11.494730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.111 [2024-11-15 10:42:11.494784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:41.111 BaseBdev4 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 [2024-11-15 10:42:11.500058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.111 [2024-11-15 10:42:11.502311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.111 [2024-11-15 10:42:11.502588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.111 [2024-11-15 10:42:11.502707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.111 [2024-11-15 10:42:11.503050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:41.111 [2024-11-15 10:42:11.503084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:41.111 [2024-11-15 10:42:11.503418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:41.111 [2024-11-15 10:42:11.503643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:41.111 [2024-11-15 10:42:11.503662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:41.111 [2024-11-15 10:42:11.503857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:41.111 10:42:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.111 "name": "raid_bdev1", 00:13:41.111 "uuid": "151c216e-e3bb-4902-a1f2-7f041179bafc", 00:13:41.111 "strip_size_kb": 64, 00:13:41.111 "state": "online", 00:13:41.111 "raid_level": "concat", 00:13:41.111 "superblock": true, 00:13:41.111 "num_base_bdevs": 4, 00:13:41.111 "num_base_bdevs_discovered": 4, 00:13:41.111 "num_base_bdevs_operational": 4, 00:13:41.111 "base_bdevs_list": [ 
00:13:41.111 { 00:13:41.111 "name": "BaseBdev1", 00:13:41.111 "uuid": "04fe97a6-f7f9-5785-97c5-21702c5c27f9", 00:13:41.111 "is_configured": true, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 }, 00:13:41.111 { 00:13:41.111 "name": "BaseBdev2", 00:13:41.111 "uuid": "7dc7131e-70d1-5f50-b9f9-3aa24ef3f2c3", 00:13:41.111 "is_configured": true, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 }, 00:13:41.111 { 00:13:41.111 "name": "BaseBdev3", 00:13:41.111 "uuid": "e69d9f27-35cd-59ca-a712-408162985f0e", 00:13:41.111 "is_configured": true, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 }, 00:13:41.111 { 00:13:41.111 "name": "BaseBdev4", 00:13:41.111 "uuid": "87c418e1-9c72-5803-9c1f-e2a665226067", 00:13:41.111 "is_configured": true, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 } 00:13:41.111 ] 00:13:41.111 }' 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.111 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.678 10:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:41.679 10:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:41.679 [2024-11-15 10:42:12.133588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.616 10:42:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.616 10:42:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.616 "name": "raid_bdev1", 00:13:42.616 "uuid": "151c216e-e3bb-4902-a1f2-7f041179bafc", 00:13:42.616 "strip_size_kb": 64, 00:13:42.616 "state": "online", 00:13:42.616 "raid_level": "concat", 00:13:42.616 "superblock": true, 00:13:42.616 "num_base_bdevs": 4, 00:13:42.616 "num_base_bdevs_discovered": 4, 00:13:42.616 "num_base_bdevs_operational": 4, 00:13:42.616 "base_bdevs_list": [ 00:13:42.616 { 00:13:42.616 "name": "BaseBdev1", 00:13:42.616 "uuid": "04fe97a6-f7f9-5785-97c5-21702c5c27f9", 00:13:42.616 "is_configured": true, 00:13:42.616 "data_offset": 2048, 00:13:42.616 "data_size": 63488 00:13:42.616 }, 00:13:42.616 { 00:13:42.616 "name": "BaseBdev2", 00:13:42.616 "uuid": "7dc7131e-70d1-5f50-b9f9-3aa24ef3f2c3", 00:13:42.616 "is_configured": true, 00:13:42.616 "data_offset": 2048, 00:13:42.616 "data_size": 63488 00:13:42.616 }, 00:13:42.616 { 00:13:42.616 "name": "BaseBdev3", 00:13:42.616 "uuid": "e69d9f27-35cd-59ca-a712-408162985f0e", 00:13:42.616 "is_configured": true, 00:13:42.616 "data_offset": 2048, 00:13:42.616 "data_size": 63488 00:13:42.616 }, 00:13:42.616 { 00:13:42.616 "name": "BaseBdev4", 00:13:42.616 "uuid": "87c418e1-9c72-5803-9c1f-e2a665226067", 00:13:42.616 "is_configured": true, 00:13:42.616 "data_offset": 2048, 00:13:42.616 "data_size": 63488 00:13:42.616 } 00:13:42.616 ] 00:13:42.616 }' 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.616 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.186 [2024-11-15 10:42:13.577558] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.186 [2024-11-15 10:42:13.577740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.186 [2024-11-15 10:42:13.581364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.186 [2024-11-15 10:42:13.581576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.186 [2024-11-15 10:42:13.581685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.186 [2024-11-15 10:42:13.581908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:43.186 { 00:13:43.186 "results": [ 00:13:43.186 { 00:13:43.186 "job": "raid_bdev1", 00:13:43.186 "core_mask": "0x1", 00:13:43.186 "workload": "randrw", 00:13:43.186 "percentage": 50, 00:13:43.186 "status": "finished", 00:13:43.186 "queue_depth": 1, 00:13:43.186 "io_size": 131072, 00:13:43.186 "runtime": 1.44173, 00:13:43.186 "iops": 10743.343067009773, 00:13:43.186 "mibps": 1342.9178833762217, 00:13:43.186 "io_failed": 1, 00:13:43.186 "io_timeout": 0, 00:13:43.186 "avg_latency_us": 127.94220130289337, 00:13:43.186 "min_latency_us": 44.68363636363637, 00:13:43.186 "max_latency_us": 1876.7127272727273 00:13:43.186 } 00:13:43.186 ], 00:13:43.186 "core_count": 1 00:13:43.186 } 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73205 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73205 ']' 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73205 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73205 00:13:43.186 killing process with pid 73205 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73205' 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73205 00:13:43.186 [2024-11-15 10:42:13.612396] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.186 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73205 00:13:43.446 [2024-11-15 10:42:13.885725] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.A1watTXu1W 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:44.382 ************************************ 00:13:44.382 END TEST raid_read_error_test 00:13:44.382 ************************************ 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:13:44.382 00:13:44.382 real 0m4.821s 
00:13:44.382 user 0m6.119s 00:13:44.382 sys 0m0.490s 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:44.382 10:42:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.640 10:42:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:44.640 10:42:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:44.640 10:42:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:44.640 10:42:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.640 ************************************ 00:13:44.641 START TEST raid_write_error_test 00:13:44.641 ************************************ 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ISy7DRNURR 00:13:44.641 10:42:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73352 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73352 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73352 ']' 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:44.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:44.641 10:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.641 [2024-11-15 10:42:15.116523] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:13:44.641 [2024-11-15 10:42:15.116897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73352 ] 00:13:44.900 [2024-11-15 10:42:15.307059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.159 [2024-11-15 10:42:15.459513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.159 [2024-11-15 10:42:15.656427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.159 [2024-11-15 10:42:15.656477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 BaseBdev1_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 true 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 [2024-11-15 10:42:16.082446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:45.727 [2024-11-15 10:42:16.082512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.727 [2024-11-15 10:42:16.082540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:45.727 [2024-11-15 10:42:16.082558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.727 [2024-11-15 10:42:16.085161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.727 [2024-11-15 10:42:16.085208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.727 BaseBdev1 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 BaseBdev2_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:45.727 10:42:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 true 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 [2024-11-15 10:42:16.137747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:45.727 [2024-11-15 10:42:16.137811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.727 [2024-11-15 10:42:16.137835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:45.727 [2024-11-15 10:42:16.137852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.727 [2024-11-15 10:42:16.140455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.727 [2024-11-15 10:42:16.140505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:45.727 BaseBdev2 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:45.727 BaseBdev3_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 true 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 [2024-11-15 10:42:16.210271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:45.727 [2024-11-15 10:42:16.210361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.727 [2024-11-15 10:42:16.210395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:45.727 [2024-11-15 10:42:16.210417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.727 [2024-11-15 10:42:16.213650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.727 [2024-11-15 10:42:16.213707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:45.727 BaseBdev3 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 BaseBdev4_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 true 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 [2024-11-15 10:42:16.269702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:45.727 [2024-11-15 10:42:16.269777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.727 [2024-11-15 10:42:16.269809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:45.727 [2024-11-15 10:42:16.269830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.727 [2024-11-15 10:42:16.273073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.727 [2024-11-15 10:42:16.273135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:45.727 BaseBdev4 
00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.727 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.727 [2024-11-15 10:42:16.277945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.727 [2024-11-15 10:42:16.280725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.727 [2024-11-15 10:42:16.280861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.728 [2024-11-15 10:42:16.280981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:45.728 [2024-11-15 10:42:16.281371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:45.728 [2024-11-15 10:42:16.281411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:45.728 [2024-11-15 10:42:16.281787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:45.728 [2024-11-15 10:42:16.282061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:45.728 [2024-11-15 10:42:16.282101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:45.728 [2024-11-15 10:42:16.282421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.987 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.987 "name": "raid_bdev1", 00:13:45.987 "uuid": "54388172-c70b-48db-a48c-ea0a5f8a189f", 00:13:45.987 "strip_size_kb": 64, 00:13:45.987 "state": "online", 00:13:45.987 "raid_level": "concat", 00:13:45.987 "superblock": true, 00:13:45.987 "num_base_bdevs": 4, 00:13:45.987 "num_base_bdevs_discovered": 4, 00:13:45.987 
"num_base_bdevs_operational": 4, 00:13:45.987 "base_bdevs_list": [ 00:13:45.987 { 00:13:45.987 "name": "BaseBdev1", 00:13:45.987 "uuid": "49759024-4866-5e75-be37-d8432b8b0103", 00:13:45.987 "is_configured": true, 00:13:45.987 "data_offset": 2048, 00:13:45.987 "data_size": 63488 00:13:45.987 }, 00:13:45.987 { 00:13:45.987 "name": "BaseBdev2", 00:13:45.987 "uuid": "9d89a585-67dc-505b-8053-91f0ca185daf", 00:13:45.988 "is_configured": true, 00:13:45.988 "data_offset": 2048, 00:13:45.988 "data_size": 63488 00:13:45.988 }, 00:13:45.988 { 00:13:45.988 "name": "BaseBdev3", 00:13:45.988 "uuid": "f3543e86-00be-52ed-bff2-a5c0d1fa2902", 00:13:45.988 "is_configured": true, 00:13:45.988 "data_offset": 2048, 00:13:45.988 "data_size": 63488 00:13:45.988 }, 00:13:45.988 { 00:13:45.988 "name": "BaseBdev4", 00:13:45.988 "uuid": "67504583-ded4-574f-afdf-2b527ecbf304", 00:13:45.988 "is_configured": true, 00:13:45.988 "data_offset": 2048, 00:13:45.988 "data_size": 63488 00:13:45.988 } 00:13:45.988 ] 00:13:45.988 }' 00:13:45.988 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.988 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.246 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:46.246 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:46.505 [2024-11-15 10:42:16.919847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- 
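The `verify_raid_bdev_state` helper invoked above pulls the `raid_bdev1` entry out of `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'` and compares fields against the expected values (online, concat, strip size 64, 4 operational base bdevs). A rough standalone equivalent of that check, written in plain Python instead of the jq/shell pipeline and using a copy of the JSON shown in this log trimmed to the fields the test inspects, might look like:

```python
import json

# Trimmed copy of the bdev_raid_get_bdevs output captured above.
raid_bdevs = json.loads("""
[{"name": "raid_bdev1", "strip_size_kb": 64, "state": "online",
  "raid_level": "concat", "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4, "num_base_bdevs_operational": 4}]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    """Sketch of the shell helper: select the bdev by name, compare fields."""
    info = next(b for b in bdevs if b["name"] == name)  # jq select(.name == ...)
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(raid_bdevs, "raid_bdev1",
                             "online", "concat", 64, 4))  # -> True
```

Note the test re-runs this verification after injecting the write error: for concat, `expected_num_base_bdevs` stays 4 because a single failed write does not degrade the array configuration, only the I/O result.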
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.443 10:42:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.443 "name": "raid_bdev1", 00:13:47.443 "uuid": "54388172-c70b-48db-a48c-ea0a5f8a189f", 00:13:47.443 "strip_size_kb": 64, 00:13:47.443 "state": "online", 00:13:47.443 "raid_level": "concat", 00:13:47.443 "superblock": true, 00:13:47.443 "num_base_bdevs": 4, 00:13:47.443 "num_base_bdevs_discovered": 4, 00:13:47.443 "num_base_bdevs_operational": 4, 00:13:47.443 "base_bdevs_list": [ 00:13:47.443 { 00:13:47.443 "name": "BaseBdev1", 00:13:47.443 "uuid": "49759024-4866-5e75-be37-d8432b8b0103", 00:13:47.443 "is_configured": true, 00:13:47.443 "data_offset": 2048, 00:13:47.443 "data_size": 63488 00:13:47.443 }, 00:13:47.443 { 00:13:47.443 "name": "BaseBdev2", 00:13:47.443 "uuid": "9d89a585-67dc-505b-8053-91f0ca185daf", 00:13:47.443 "is_configured": true, 00:13:47.443 "data_offset": 2048, 00:13:47.443 "data_size": 63488 00:13:47.443 }, 00:13:47.443 { 00:13:47.443 "name": "BaseBdev3", 00:13:47.443 "uuid": "f3543e86-00be-52ed-bff2-a5c0d1fa2902", 00:13:47.443 "is_configured": true, 00:13:47.443 "data_offset": 2048, 00:13:47.443 "data_size": 63488 00:13:47.443 }, 00:13:47.443 { 00:13:47.443 "name": "BaseBdev4", 00:13:47.443 "uuid": "67504583-ded4-574f-afdf-2b527ecbf304", 00:13:47.443 "is_configured": true, 00:13:47.443 "data_offset": 2048, 00:13:47.443 "data_size": 63488 00:13:47.443 } 00:13:47.443 ] 00:13:47.443 }' 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.443 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.011 [2024-11-15 10:42:18.335121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:48.011 [2024-11-15 10:42:18.335165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.011 [2024-11-15 10:42:18.338750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.011 [2024-11-15 10:42:18.338847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.011 [2024-11-15 10:42:18.338908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.011 [2024-11-15 10:42:18.338927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:48.011 { 00:13:48.011 "results": [ 00:13:48.011 { 00:13:48.011 "job": "raid_bdev1", 00:13:48.011 "core_mask": "0x1", 00:13:48.011 "workload": "randrw", 00:13:48.011 "percentage": 50, 00:13:48.011 "status": "finished", 00:13:48.011 "queue_depth": 1, 00:13:48.011 "io_size": 131072, 00:13:48.011 "runtime": 1.41306, 00:13:48.011 "iops": 10412.155180954807, 00:13:48.011 "mibps": 1301.519397619351, 00:13:48.011 "io_failed": 1, 00:13:48.011 "io_timeout": 0, 00:13:48.011 "avg_latency_us": 131.63719796853954, 00:13:48.011 "min_latency_us": 39.56363636363636, 00:13:48.011 "max_latency_us": 1884.16 00:13:48.011 } 00:13:48.011 ], 00:13:48.011 "core_count": 1 00:13:48.011 } 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73352 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73352 ']' 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73352 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73352 00:13:48.011 killing process with pid 73352 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73352' 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73352 00:13:48.011 [2024-11-15 10:42:18.369447] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.011 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73352 00:13:48.270 [2024-11-15 10:42:18.646501] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.300 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ISy7DRNURR 00:13:49.300 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:49.300 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:49.300 ************************************ 00:13:49.300 END TEST raid_write_error_test 00:13:49.300 ************************************ 00:13:49.300 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:49.300 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:49.300 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:49.300 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:49.300 10:42:19 
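The `fail_per_s` figure extracted from the bdevperf log above is consistent with `io_failed / runtime` printed to two decimals, and the subsequent `[[ $fail_per_s != \0\.\0\0 ]]` check confirms the injected error actually surfaced: concat has no redundancy (`has_redundancy` returns 1), so the one failed base-bdev write must show up as a failed raid I/O. The arithmetic, as a small sketch outside the test scripts using the values from both runs in this log:

```python
# fail_per_s as bdevperf reports it: failed I/Os per second of runtime,
# formatted to two decimal places.
def fail_per_s(io_failed: int, runtime_s: float) -> str:
    return f"{io_failed / runtime_s:.2f}"

# io_failed=1 in both runs recorded in this log.
print(fail_per_s(1, 1.41306))   # write test -> 0.71
print(fail_per_s(1, 1.44173))   # read test  -> 0.69

# Both values are nonzero, so the shell check `[[ $fail_per_s != \0\.\0\0 ]]`
# passes and the test counts the injected error as observed.
assert fail_per_s(1, 1.41306) != "0.00"
```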
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:49.300 00:13:49.300 real 0m4.724s 00:13:49.300 user 0m5.887s 00:13:49.300 sys 0m0.498s 00:13:49.300 10:42:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:49.300 10:42:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.300 10:42:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:49.300 10:42:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:49.301 10:42:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:49.301 10:42:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:49.301 10:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.301 ************************************ 00:13:49.301 START TEST raid_state_function_test 00:13:49.301 ************************************ 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:49.301 10:42:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:49.301 Process raid pid: 73490 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73490 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73490' 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73490 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73490 ']' 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:49.301 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.559 [2024-11-15 10:42:19.875177] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:13:49.559 [2024-11-15 10:42:19.875609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.559 [2024-11-15 10:42:20.064176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.817 [2024-11-15 10:42:20.192937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.075 [2024-11-15 10:42:20.395936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.075 [2024-11-15 10:42:20.396222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.640 [2024-11-15 10:42:20.897246] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.640 [2024-11-15 10:42:20.897518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.640 [2024-11-15 10:42:20.897550] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.640 [2024-11-15 10:42:20.897570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.640 [2024-11-15 10:42:20.897581] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:50.640 [2024-11-15 10:42:20.897595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.640 [2024-11-15 10:42:20.897605] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:50.640 [2024-11-15 10:42:20.897619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.640 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.640 "name": "Existed_Raid", 00:13:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.640 "strip_size_kb": 0, 00:13:50.640 "state": "configuring", 00:13:50.640 "raid_level": "raid1", 00:13:50.640 "superblock": false, 00:13:50.640 "num_base_bdevs": 4, 00:13:50.640 "num_base_bdevs_discovered": 0, 00:13:50.640 "num_base_bdevs_operational": 4, 00:13:50.640 "base_bdevs_list": [ 00:13:50.640 { 00:13:50.640 "name": "BaseBdev1", 00:13:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.640 "is_configured": false, 00:13:50.640 "data_offset": 0, 00:13:50.640 "data_size": 0 00:13:50.640 }, 00:13:50.640 { 00:13:50.640 "name": "BaseBdev2", 00:13:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.640 "is_configured": false, 00:13:50.640 "data_offset": 0, 00:13:50.640 "data_size": 0 00:13:50.640 }, 00:13:50.640 { 00:13:50.640 "name": "BaseBdev3", 00:13:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.640 "is_configured": false, 00:13:50.640 "data_offset": 0, 00:13:50.640 "data_size": 0 00:13:50.640 }, 00:13:50.640 { 00:13:50.640 "name": "BaseBdev4", 00:13:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.641 "is_configured": false, 00:13:50.641 "data_offset": 0, 00:13:50.641 "data_size": 0 00:13:50.641 } 00:13:50.641 ] 00:13:50.641 }' 00:13:50.641 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.641 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.899 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:50.899 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.899 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.899 [2024-11-15 10:42:21.405382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.899 [2024-11-15 10:42:21.405429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:50.899 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 [2024-11-15 10:42:21.413328] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.900 [2024-11-15 10:42:21.413405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.900 [2024-11-15 10:42:21.413423] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.900 [2024-11-15 10:42:21.413439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.900 [2024-11-15 10:42:21.413449] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.900 [2024-11-15 10:42:21.413463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.900 [2024-11-15 10:42:21.413473] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:50.900 [2024-11-15 10:42:21.413487] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 [2024-11-15 10:42:21.453866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.900 BaseBdev1 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.900 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.159 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.159 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:51.159 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.159 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.159 [ 00:13:51.159 { 00:13:51.159 "name": "BaseBdev1", 00:13:51.159 "aliases": [ 00:13:51.159 "09dc0ab7-675d-4133-bbfd-a2c9d72624ca" 00:13:51.159 ], 00:13:51.159 "product_name": "Malloc disk", 00:13:51.159 "block_size": 512, 00:13:51.159 "num_blocks": 65536, 00:13:51.159 "uuid": "09dc0ab7-675d-4133-bbfd-a2c9d72624ca", 00:13:51.159 "assigned_rate_limits": { 00:13:51.159 "rw_ios_per_sec": 0, 00:13:51.159 "rw_mbytes_per_sec": 0, 00:13:51.159 "r_mbytes_per_sec": 0, 00:13:51.159 "w_mbytes_per_sec": 0 00:13:51.159 }, 00:13:51.159 "claimed": true, 00:13:51.159 "claim_type": "exclusive_write", 00:13:51.159 "zoned": false, 00:13:51.159 "supported_io_types": { 00:13:51.159 "read": true, 00:13:51.159 "write": true, 00:13:51.159 "unmap": true, 00:13:51.159 "flush": true, 00:13:51.159 "reset": true, 00:13:51.159 "nvme_admin": false, 00:13:51.159 "nvme_io": false, 00:13:51.159 "nvme_io_md": false, 00:13:51.159 "write_zeroes": true, 00:13:51.159 "zcopy": true, 00:13:51.159 "get_zone_info": false, 00:13:51.159 "zone_management": false, 00:13:51.159 "zone_append": false, 00:13:51.159 "compare": false, 00:13:51.159 "compare_and_write": false, 00:13:51.159 "abort": true, 00:13:51.159 "seek_hole": false, 00:13:51.159 "seek_data": false, 00:13:51.159 "copy": true, 00:13:51.159 "nvme_iov_md": false 00:13:51.159 }, 00:13:51.159 "memory_domains": [ 00:13:51.159 { 00:13:51.159 "dma_device_id": "system", 00:13:51.159 "dma_device_type": 1 00:13:51.159 }, 00:13:51.159 { 00:13:51.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.159 "dma_device_type": 2 00:13:51.159 } 00:13:51.159 ], 00:13:51.159 "driver_specific": {} 00:13:51.159 } 00:13:51.160 ] 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.160 "name": "Existed_Raid", 
00:13:51.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.160 "strip_size_kb": 0, 00:13:51.160 "state": "configuring", 00:13:51.160 "raid_level": "raid1", 00:13:51.160 "superblock": false, 00:13:51.160 "num_base_bdevs": 4, 00:13:51.160 "num_base_bdevs_discovered": 1, 00:13:51.160 "num_base_bdevs_operational": 4, 00:13:51.160 "base_bdevs_list": [ 00:13:51.160 { 00:13:51.160 "name": "BaseBdev1", 00:13:51.160 "uuid": "09dc0ab7-675d-4133-bbfd-a2c9d72624ca", 00:13:51.160 "is_configured": true, 00:13:51.160 "data_offset": 0, 00:13:51.160 "data_size": 65536 00:13:51.160 }, 00:13:51.160 { 00:13:51.160 "name": "BaseBdev2", 00:13:51.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.160 "is_configured": false, 00:13:51.160 "data_offset": 0, 00:13:51.160 "data_size": 0 00:13:51.160 }, 00:13:51.160 { 00:13:51.160 "name": "BaseBdev3", 00:13:51.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.160 "is_configured": false, 00:13:51.160 "data_offset": 0, 00:13:51.160 "data_size": 0 00:13:51.160 }, 00:13:51.160 { 00:13:51.160 "name": "BaseBdev4", 00:13:51.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.160 "is_configured": false, 00:13:51.160 "data_offset": 0, 00:13:51.160 "data_size": 0 00:13:51.160 } 00:13:51.160 ] 00:13:51.160 }' 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.160 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.726 [2024-11-15 10:42:22.034076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.726 [2024-11-15 10:42:22.034137] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.726 [2024-11-15 10:42:22.042109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.726 [2024-11-15 10:42:22.044648] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.726 [2024-11-15 10:42:22.044707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.726 [2024-11-15 10:42:22.044726] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:51.726 [2024-11-15 10:42:22.044760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.726 [2024-11-15 10:42:22.044771] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:51.726 [2024-11-15 10:42:22.044784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:51.726 
10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.726 "name": "Existed_Raid", 00:13:51.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.726 "strip_size_kb": 0, 00:13:51.726 "state": "configuring", 00:13:51.726 "raid_level": "raid1", 00:13:51.726 "superblock": false, 00:13:51.726 "num_base_bdevs": 4, 00:13:51.726 "num_base_bdevs_discovered": 1, 
00:13:51.726 "num_base_bdevs_operational": 4, 00:13:51.726 "base_bdevs_list": [ 00:13:51.726 { 00:13:51.726 "name": "BaseBdev1", 00:13:51.726 "uuid": "09dc0ab7-675d-4133-bbfd-a2c9d72624ca", 00:13:51.726 "is_configured": true, 00:13:51.726 "data_offset": 0, 00:13:51.726 "data_size": 65536 00:13:51.726 }, 00:13:51.726 { 00:13:51.726 "name": "BaseBdev2", 00:13:51.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.726 "is_configured": false, 00:13:51.726 "data_offset": 0, 00:13:51.726 "data_size": 0 00:13:51.726 }, 00:13:51.726 { 00:13:51.726 "name": "BaseBdev3", 00:13:51.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.726 "is_configured": false, 00:13:51.726 "data_offset": 0, 00:13:51.726 "data_size": 0 00:13:51.726 }, 00:13:51.726 { 00:13:51.726 "name": "BaseBdev4", 00:13:51.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.726 "is_configured": false, 00:13:51.726 "data_offset": 0, 00:13:51.726 "data_size": 0 00:13:51.726 } 00:13:51.726 ] 00:13:51.726 }' 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.726 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.294 [2024-11-15 10:42:22.584993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.294 BaseBdev2 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.294 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.294 [ 00:13:52.294 { 00:13:52.294 "name": "BaseBdev2", 00:13:52.294 "aliases": [ 00:13:52.294 "7b877e78-5079-413c-b2fc-681f886e48fc" 00:13:52.294 ], 00:13:52.294 "product_name": "Malloc disk", 00:13:52.294 "block_size": 512, 00:13:52.294 "num_blocks": 65536, 00:13:52.294 "uuid": "7b877e78-5079-413c-b2fc-681f886e48fc", 00:13:52.294 "assigned_rate_limits": { 00:13:52.294 "rw_ios_per_sec": 0, 00:13:52.294 "rw_mbytes_per_sec": 0, 00:13:52.294 "r_mbytes_per_sec": 0, 00:13:52.294 "w_mbytes_per_sec": 0 00:13:52.294 }, 00:13:52.294 "claimed": true, 00:13:52.294 "claim_type": "exclusive_write", 00:13:52.294 "zoned": false, 00:13:52.294 "supported_io_types": { 00:13:52.294 "read": true, 
00:13:52.294 "write": true, 00:13:52.294 "unmap": true, 00:13:52.294 "flush": true, 00:13:52.294 "reset": true, 00:13:52.294 "nvme_admin": false, 00:13:52.294 "nvme_io": false, 00:13:52.294 "nvme_io_md": false, 00:13:52.294 "write_zeroes": true, 00:13:52.294 "zcopy": true, 00:13:52.294 "get_zone_info": false, 00:13:52.294 "zone_management": false, 00:13:52.294 "zone_append": false, 00:13:52.294 "compare": false, 00:13:52.294 "compare_and_write": false, 00:13:52.294 "abort": true, 00:13:52.294 "seek_hole": false, 00:13:52.294 "seek_data": false, 00:13:52.294 "copy": true, 00:13:52.294 "nvme_iov_md": false 00:13:52.294 }, 00:13:52.294 "memory_domains": [ 00:13:52.294 { 00:13:52.294 "dma_device_id": "system", 00:13:52.294 "dma_device_type": 1 00:13:52.294 }, 00:13:52.294 { 00:13:52.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.295 "dma_device_type": 2 00:13:52.295 } 00:13:52.295 ], 00:13:52.295 "driver_specific": {} 00:13:52.295 } 00:13:52.295 ] 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.295 "name": "Existed_Raid", 00:13:52.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.295 "strip_size_kb": 0, 00:13:52.295 "state": "configuring", 00:13:52.295 "raid_level": "raid1", 00:13:52.295 "superblock": false, 00:13:52.295 "num_base_bdevs": 4, 00:13:52.295 "num_base_bdevs_discovered": 2, 00:13:52.295 "num_base_bdevs_operational": 4, 00:13:52.295 "base_bdevs_list": [ 00:13:52.295 { 00:13:52.295 "name": "BaseBdev1", 00:13:52.295 "uuid": "09dc0ab7-675d-4133-bbfd-a2c9d72624ca", 00:13:52.295 "is_configured": true, 00:13:52.295 "data_offset": 0, 00:13:52.295 "data_size": 65536 00:13:52.295 }, 00:13:52.295 { 00:13:52.295 "name": "BaseBdev2", 00:13:52.295 "uuid": "7b877e78-5079-413c-b2fc-681f886e48fc", 00:13:52.295 "is_configured": true, 
00:13:52.295 "data_offset": 0, 00:13:52.295 "data_size": 65536 00:13:52.295 }, 00:13:52.295 { 00:13:52.295 "name": "BaseBdev3", 00:13:52.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.295 "is_configured": false, 00:13:52.295 "data_offset": 0, 00:13:52.295 "data_size": 0 00:13:52.295 }, 00:13:52.295 { 00:13:52.295 "name": "BaseBdev4", 00:13:52.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.295 "is_configured": false, 00:13:52.295 "data_offset": 0, 00:13:52.295 "data_size": 0 00:13:52.295 } 00:13:52.295 ] 00:13:52.295 }' 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.295 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.863 [2024-11-15 10:42:23.182533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.863 BaseBdev3 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.863 [ 00:13:52.863 { 00:13:52.863 "name": "BaseBdev3", 00:13:52.863 "aliases": [ 00:13:52.863 "e2a6983f-3da2-4030-a06a-9980f135262c" 00:13:52.863 ], 00:13:52.863 "product_name": "Malloc disk", 00:13:52.863 "block_size": 512, 00:13:52.863 "num_blocks": 65536, 00:13:52.863 "uuid": "e2a6983f-3da2-4030-a06a-9980f135262c", 00:13:52.863 "assigned_rate_limits": { 00:13:52.863 "rw_ios_per_sec": 0, 00:13:52.863 "rw_mbytes_per_sec": 0, 00:13:52.863 "r_mbytes_per_sec": 0, 00:13:52.863 "w_mbytes_per_sec": 0 00:13:52.863 }, 00:13:52.863 "claimed": true, 00:13:52.863 "claim_type": "exclusive_write", 00:13:52.863 "zoned": false, 00:13:52.863 "supported_io_types": { 00:13:52.863 "read": true, 00:13:52.863 "write": true, 00:13:52.863 "unmap": true, 00:13:52.863 "flush": true, 00:13:52.863 "reset": true, 00:13:52.863 "nvme_admin": false, 00:13:52.863 "nvme_io": false, 00:13:52.863 "nvme_io_md": false, 00:13:52.863 "write_zeroes": true, 00:13:52.863 "zcopy": true, 00:13:52.863 "get_zone_info": false, 00:13:52.863 "zone_management": false, 00:13:52.863 "zone_append": false, 00:13:52.863 "compare": false, 00:13:52.863 "compare_and_write": false, 
00:13:52.863 "abort": true, 00:13:52.863 "seek_hole": false, 00:13:52.863 "seek_data": false, 00:13:52.863 "copy": true, 00:13:52.863 "nvme_iov_md": false 00:13:52.863 }, 00:13:52.863 "memory_domains": [ 00:13:52.863 { 00:13:52.863 "dma_device_id": "system", 00:13:52.863 "dma_device_type": 1 00:13:52.863 }, 00:13:52.863 { 00:13:52.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.863 "dma_device_type": 2 00:13:52.863 } 00:13:52.863 ], 00:13:52.863 "driver_specific": {} 00:13:52.863 } 00:13:52.863 ] 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.863 "name": "Existed_Raid", 00:13:52.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.863 "strip_size_kb": 0, 00:13:52.863 "state": "configuring", 00:13:52.863 "raid_level": "raid1", 00:13:52.863 "superblock": false, 00:13:52.863 "num_base_bdevs": 4, 00:13:52.863 "num_base_bdevs_discovered": 3, 00:13:52.863 "num_base_bdevs_operational": 4, 00:13:52.863 "base_bdevs_list": [ 00:13:52.863 { 00:13:52.863 "name": "BaseBdev1", 00:13:52.863 "uuid": "09dc0ab7-675d-4133-bbfd-a2c9d72624ca", 00:13:52.863 "is_configured": true, 00:13:52.863 "data_offset": 0, 00:13:52.863 "data_size": 65536 00:13:52.863 }, 00:13:52.863 { 00:13:52.863 "name": "BaseBdev2", 00:13:52.863 "uuid": "7b877e78-5079-413c-b2fc-681f886e48fc", 00:13:52.863 "is_configured": true, 00:13:52.863 "data_offset": 0, 00:13:52.863 "data_size": 65536 00:13:52.863 }, 00:13:52.863 { 00:13:52.863 "name": "BaseBdev3", 00:13:52.863 "uuid": "e2a6983f-3da2-4030-a06a-9980f135262c", 00:13:52.863 "is_configured": true, 00:13:52.863 "data_offset": 0, 00:13:52.863 "data_size": 65536 00:13:52.863 }, 00:13:52.863 { 00:13:52.863 "name": "BaseBdev4", 00:13:52.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.863 "is_configured": false, 
00:13:52.863 "data_offset": 0, 00:13:52.863 "data_size": 0 00:13:52.863 } 00:13:52.863 ] 00:13:52.863 }' 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.863 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:53.430 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.430 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 [2024-11-15 10:42:23.784941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:53.430 [2024-11-15 10:42:23.785011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:53.430 [2024-11-15 10:42:23.785025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:53.430 [2024-11-15 10:42:23.785399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:53.430 [2024-11-15 10:42:23.785665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:53.430 [2024-11-15 10:42:23.785689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:53.430 [2024-11-15 10:42:23.786011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.430 BaseBdev4 00:13:53.430 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.430 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:53.430 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:53.430 10:42:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:53.430 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:53.430 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.431 [ 00:13:53.431 { 00:13:53.431 "name": "BaseBdev4", 00:13:53.431 "aliases": [ 00:13:53.431 "ee47d05d-e44a-4937-9abc-9790b593ec2e" 00:13:53.431 ], 00:13:53.431 "product_name": "Malloc disk", 00:13:53.431 "block_size": 512, 00:13:53.431 "num_blocks": 65536, 00:13:53.431 "uuid": "ee47d05d-e44a-4937-9abc-9790b593ec2e", 00:13:53.431 "assigned_rate_limits": { 00:13:53.431 "rw_ios_per_sec": 0, 00:13:53.431 "rw_mbytes_per_sec": 0, 00:13:53.431 "r_mbytes_per_sec": 0, 00:13:53.431 "w_mbytes_per_sec": 0 00:13:53.431 }, 00:13:53.431 "claimed": true, 00:13:53.431 "claim_type": "exclusive_write", 00:13:53.431 "zoned": false, 00:13:53.431 "supported_io_types": { 00:13:53.431 "read": true, 00:13:53.431 "write": true, 00:13:53.431 "unmap": true, 00:13:53.431 "flush": true, 00:13:53.431 "reset": true, 00:13:53.431 
"nvme_admin": false, 00:13:53.431 "nvme_io": false, 00:13:53.431 "nvme_io_md": false, 00:13:53.431 "write_zeroes": true, 00:13:53.431 "zcopy": true, 00:13:53.431 "get_zone_info": false, 00:13:53.431 "zone_management": false, 00:13:53.431 "zone_append": false, 00:13:53.431 "compare": false, 00:13:53.431 "compare_and_write": false, 00:13:53.431 "abort": true, 00:13:53.431 "seek_hole": false, 00:13:53.431 "seek_data": false, 00:13:53.431 "copy": true, 00:13:53.431 "nvme_iov_md": false 00:13:53.431 }, 00:13:53.431 "memory_domains": [ 00:13:53.431 { 00:13:53.431 "dma_device_id": "system", 00:13:53.431 "dma_device_type": 1 00:13:53.431 }, 00:13:53.431 { 00:13:53.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.431 "dma_device_type": 2 00:13:53.431 } 00:13:53.431 ], 00:13:53.431 "driver_specific": {} 00:13:53.431 } 00:13:53.431 ] 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.431 10:42:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.431 "name": "Existed_Raid", 00:13:53.431 "uuid": "0f2fb128-61d8-4c74-ab8d-ae4512091104", 00:13:53.431 "strip_size_kb": 0, 00:13:53.431 "state": "online", 00:13:53.431 "raid_level": "raid1", 00:13:53.431 "superblock": false, 00:13:53.431 "num_base_bdevs": 4, 00:13:53.431 "num_base_bdevs_discovered": 4, 00:13:53.431 "num_base_bdevs_operational": 4, 00:13:53.431 "base_bdevs_list": [ 00:13:53.431 { 00:13:53.431 "name": "BaseBdev1", 00:13:53.431 "uuid": "09dc0ab7-675d-4133-bbfd-a2c9d72624ca", 00:13:53.431 "is_configured": true, 00:13:53.431 "data_offset": 0, 00:13:53.431 "data_size": 65536 00:13:53.431 }, 00:13:53.431 { 00:13:53.431 "name": "BaseBdev2", 00:13:53.431 "uuid": "7b877e78-5079-413c-b2fc-681f886e48fc", 00:13:53.431 "is_configured": true, 00:13:53.431 "data_offset": 0, 00:13:53.431 "data_size": 65536 00:13:53.431 }, 00:13:53.431 { 00:13:53.431 "name": "BaseBdev3", 00:13:53.431 "uuid": 
"e2a6983f-3da2-4030-a06a-9980f135262c", 00:13:53.431 "is_configured": true, 00:13:53.431 "data_offset": 0, 00:13:53.431 "data_size": 65536 00:13:53.431 }, 00:13:53.431 { 00:13:53.431 "name": "BaseBdev4", 00:13:53.431 "uuid": "ee47d05d-e44a-4937-9abc-9790b593ec2e", 00:13:53.431 "is_configured": true, 00:13:53.431 "data_offset": 0, 00:13:53.431 "data_size": 65536 00:13:53.431 } 00:13:53.431 ] 00:13:53.431 }' 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.431 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.998 [2024-11-15 10:42:24.281638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.998 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.998 10:42:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.998 "name": "Existed_Raid", 00:13:53.998 "aliases": [ 00:13:53.998 "0f2fb128-61d8-4c74-ab8d-ae4512091104" 00:13:53.998 ], 00:13:53.998 "product_name": "Raid Volume", 00:13:53.998 "block_size": 512, 00:13:53.998 "num_blocks": 65536, 00:13:53.998 "uuid": "0f2fb128-61d8-4c74-ab8d-ae4512091104", 00:13:53.998 "assigned_rate_limits": { 00:13:53.998 "rw_ios_per_sec": 0, 00:13:53.998 "rw_mbytes_per_sec": 0, 00:13:53.998 "r_mbytes_per_sec": 0, 00:13:53.998 "w_mbytes_per_sec": 0 00:13:53.998 }, 00:13:53.998 "claimed": false, 00:13:53.998 "zoned": false, 00:13:53.998 "supported_io_types": { 00:13:53.998 "read": true, 00:13:53.998 "write": true, 00:13:53.998 "unmap": false, 00:13:53.998 "flush": false, 00:13:53.998 "reset": true, 00:13:53.998 "nvme_admin": false, 00:13:53.998 "nvme_io": false, 00:13:53.998 "nvme_io_md": false, 00:13:53.998 "write_zeroes": true, 00:13:53.998 "zcopy": false, 00:13:53.998 "get_zone_info": false, 00:13:53.998 "zone_management": false, 00:13:53.998 "zone_append": false, 00:13:53.998 "compare": false, 00:13:53.998 "compare_and_write": false, 00:13:53.998 "abort": false, 00:13:53.998 "seek_hole": false, 00:13:53.998 "seek_data": false, 00:13:53.998 "copy": false, 00:13:53.998 "nvme_iov_md": false 00:13:53.998 }, 00:13:53.998 "memory_domains": [ 00:13:53.998 { 00:13:53.998 "dma_device_id": "system", 00:13:53.998 "dma_device_type": 1 00:13:53.998 }, 00:13:53.998 { 00:13:53.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.998 "dma_device_type": 2 00:13:53.998 }, 00:13:53.998 { 00:13:53.998 "dma_device_id": "system", 00:13:53.998 "dma_device_type": 1 00:13:53.998 }, 00:13:53.998 { 00:13:53.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.998 "dma_device_type": 2 00:13:53.998 }, 00:13:53.998 { 00:13:53.998 "dma_device_id": "system", 00:13:53.998 "dma_device_type": 1 00:13:53.998 }, 00:13:53.998 { 00:13:53.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:53.998 "dma_device_type": 2 00:13:53.998 }, 00:13:53.998 { 00:13:53.998 "dma_device_id": "system", 00:13:53.998 "dma_device_type": 1 00:13:53.998 }, 00:13:53.998 { 00:13:53.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.998 "dma_device_type": 2 00:13:53.998 } 00:13:53.998 ], 00:13:53.998 "driver_specific": { 00:13:53.998 "raid": { 00:13:53.998 "uuid": "0f2fb128-61d8-4c74-ab8d-ae4512091104", 00:13:53.998 "strip_size_kb": 0, 00:13:53.998 "state": "online", 00:13:53.998 "raid_level": "raid1", 00:13:53.998 "superblock": false, 00:13:53.998 "num_base_bdevs": 4, 00:13:53.998 "num_base_bdevs_discovered": 4, 00:13:53.998 "num_base_bdevs_operational": 4, 00:13:53.998 "base_bdevs_list": [ 00:13:53.998 { 00:13:53.998 "name": "BaseBdev1", 00:13:53.998 "uuid": "09dc0ab7-675d-4133-bbfd-a2c9d72624ca", 00:13:53.998 "is_configured": true, 00:13:53.998 "data_offset": 0, 00:13:53.998 "data_size": 65536 00:13:53.998 }, 00:13:53.998 { 00:13:53.998 "name": "BaseBdev2", 00:13:53.999 "uuid": "7b877e78-5079-413c-b2fc-681f886e48fc", 00:13:53.999 "is_configured": true, 00:13:53.999 "data_offset": 0, 00:13:53.999 "data_size": 65536 00:13:53.999 }, 00:13:53.999 { 00:13:53.999 "name": "BaseBdev3", 00:13:53.999 "uuid": "e2a6983f-3da2-4030-a06a-9980f135262c", 00:13:53.999 "is_configured": true, 00:13:53.999 "data_offset": 0, 00:13:53.999 "data_size": 65536 00:13:53.999 }, 00:13:53.999 { 00:13:53.999 "name": "BaseBdev4", 00:13:53.999 "uuid": "ee47d05d-e44a-4937-9abc-9790b593ec2e", 00:13:53.999 "is_configured": true, 00:13:53.999 "data_offset": 0, 00:13:53.999 "data_size": 65536 00:13:53.999 } 00:13:53.999 ] 00:13:53.999 } 00:13:53.999 } 00:13:53.999 }' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:53.999 BaseBdev2 00:13:53.999 BaseBdev3 
00:13:53.999 BaseBdev4' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.999 10:42:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.999 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.258 10:42:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.258 [2024-11-15 10:42:24.625412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.258 
10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.258 "name": "Existed_Raid", 00:13:54.258 "uuid": "0f2fb128-61d8-4c74-ab8d-ae4512091104", 00:13:54.258 "strip_size_kb": 0, 00:13:54.258 "state": "online", 00:13:54.258 "raid_level": "raid1", 00:13:54.258 "superblock": false, 00:13:54.258 "num_base_bdevs": 4, 00:13:54.258 "num_base_bdevs_discovered": 3, 00:13:54.258 "num_base_bdevs_operational": 3, 00:13:54.258 "base_bdevs_list": [ 00:13:54.258 { 00:13:54.258 "name": null, 00:13:54.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.258 "is_configured": false, 00:13:54.258 "data_offset": 0, 00:13:54.258 "data_size": 65536 00:13:54.258 }, 00:13:54.258 { 00:13:54.258 "name": "BaseBdev2", 00:13:54.258 "uuid": "7b877e78-5079-413c-b2fc-681f886e48fc", 00:13:54.258 "is_configured": true, 00:13:54.258 "data_offset": 0, 00:13:54.258 "data_size": 65536 00:13:54.258 }, 00:13:54.258 { 00:13:54.258 "name": "BaseBdev3", 00:13:54.258 "uuid": "e2a6983f-3da2-4030-a06a-9980f135262c", 00:13:54.258 "is_configured": true, 00:13:54.258 "data_offset": 0, 
00:13:54.258 "data_size": 65536 00:13:54.258 }, 00:13:54.258 { 00:13:54.258 "name": "BaseBdev4", 00:13:54.258 "uuid": "ee47d05d-e44a-4937-9abc-9790b593ec2e", 00:13:54.258 "is_configured": true, 00:13:54.258 "data_offset": 0, 00:13:54.258 "data_size": 65536 00:13:54.258 } 00:13:54.258 ] 00:13:54.258 }' 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.258 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.849 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:54.849 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.849 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.849 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.849 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.849 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.850 [2024-11-15 10:42:25.265766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.850 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.850 [2024-11-15 10:42:25.405953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.109 [2024-11-15 10:42:25.541561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:55.109 [2024-11-15 10:42:25.541682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.109 [2024-11-15 10:42:25.622077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.109 [2024-11-15 10:42:25.622318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.109 [2024-11-15 10:42:25.622556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:55.109 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 BaseBdev2 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 
-- # [[ -z '' ]] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 [ 00:13:55.370 { 00:13:55.370 "name": "BaseBdev2", 00:13:55.370 "aliases": [ 00:13:55.370 "34e208c5-5ae3-4fff-b3a7-4c59782ebd98" 00:13:55.370 ], 00:13:55.370 "product_name": "Malloc disk", 00:13:55.370 "block_size": 512, 00:13:55.370 "num_blocks": 65536, 00:13:55.370 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:13:55.370 "assigned_rate_limits": { 00:13:55.370 "rw_ios_per_sec": 0, 00:13:55.370 "rw_mbytes_per_sec": 0, 00:13:55.370 "r_mbytes_per_sec": 0, 00:13:55.370 "w_mbytes_per_sec": 0 00:13:55.370 }, 00:13:55.370 "claimed": false, 00:13:55.370 "zoned": false, 00:13:55.370 "supported_io_types": { 00:13:55.370 "read": true, 00:13:55.370 "write": true, 00:13:55.370 "unmap": true, 00:13:55.370 "flush": true, 00:13:55.370 "reset": true, 00:13:55.370 "nvme_admin": false, 00:13:55.370 "nvme_io": false, 00:13:55.370 "nvme_io_md": false, 00:13:55.370 "write_zeroes": true, 00:13:55.370 "zcopy": true, 00:13:55.370 "get_zone_info": false, 00:13:55.370 "zone_management": false, 00:13:55.370 "zone_append": false, 00:13:55.370 "compare": false, 
00:13:55.370 "compare_and_write": false, 00:13:55.370 "abort": true, 00:13:55.370 "seek_hole": false, 00:13:55.370 "seek_data": false, 00:13:55.370 "copy": true, 00:13:55.370 "nvme_iov_md": false 00:13:55.370 }, 00:13:55.370 "memory_domains": [ 00:13:55.370 { 00:13:55.370 "dma_device_id": "system", 00:13:55.370 "dma_device_type": 1 00:13:55.370 }, 00:13:55.370 { 00:13:55.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.370 "dma_device_type": 2 00:13:55.370 } 00:13:55.370 ], 00:13:55.370 "driver_specific": {} 00:13:55.370 } 00:13:55.370 ] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 BaseBdev3 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' 
]] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.370 [ 00:13:55.370 { 00:13:55.370 "name": "BaseBdev3", 00:13:55.370 "aliases": [ 00:13:55.370 "17ae44f4-7a45-4414-aa7e-27f7e49191a7" 00:13:55.370 ], 00:13:55.370 "product_name": "Malloc disk", 00:13:55.370 "block_size": 512, 00:13:55.370 "num_blocks": 65536, 00:13:55.370 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:13:55.370 "assigned_rate_limits": { 00:13:55.370 "rw_ios_per_sec": 0, 00:13:55.370 "rw_mbytes_per_sec": 0, 00:13:55.370 "r_mbytes_per_sec": 0, 00:13:55.370 "w_mbytes_per_sec": 0 00:13:55.370 }, 00:13:55.370 "claimed": false, 00:13:55.370 "zoned": false, 00:13:55.370 "supported_io_types": { 00:13:55.370 "read": true, 00:13:55.370 "write": true, 00:13:55.370 "unmap": true, 00:13:55.370 "flush": true, 00:13:55.370 "reset": true, 00:13:55.370 "nvme_admin": false, 00:13:55.370 "nvme_io": false, 00:13:55.370 "nvme_io_md": false, 00:13:55.370 "write_zeroes": true, 00:13:55.370 "zcopy": true, 00:13:55.370 "get_zone_info": false, 00:13:55.370 "zone_management": false, 00:13:55.370 "zone_append": false, 00:13:55.370 "compare": false, 00:13:55.370 
"compare_and_write": false, 00:13:55.370 "abort": true, 00:13:55.370 "seek_hole": false, 00:13:55.370 "seek_data": false, 00:13:55.370 "copy": true, 00:13:55.370 "nvme_iov_md": false 00:13:55.370 }, 00:13:55.370 "memory_domains": [ 00:13:55.370 { 00:13:55.370 "dma_device_id": "system", 00:13:55.370 "dma_device_type": 1 00:13:55.370 }, 00:13:55.370 { 00:13:55.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.370 "dma_device_type": 2 00:13:55.370 } 00:13:55.370 ], 00:13:55.370 "driver_specific": {} 00:13:55.370 } 00:13:55.370 ] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:55.370 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.371 BaseBdev4 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 
00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.371 [ 00:13:55.371 { 00:13:55.371 "name": "BaseBdev4", 00:13:55.371 "aliases": [ 00:13:55.371 "9040303f-c240-4184-b796-7fe778d6bf4d" 00:13:55.371 ], 00:13:55.371 "product_name": "Malloc disk", 00:13:55.371 "block_size": 512, 00:13:55.371 "num_blocks": 65536, 00:13:55.371 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:13:55.371 "assigned_rate_limits": { 00:13:55.371 "rw_ios_per_sec": 0, 00:13:55.371 "rw_mbytes_per_sec": 0, 00:13:55.371 "r_mbytes_per_sec": 0, 00:13:55.371 "w_mbytes_per_sec": 0 00:13:55.371 }, 00:13:55.371 "claimed": false, 00:13:55.371 "zoned": false, 00:13:55.371 "supported_io_types": { 00:13:55.371 "read": true, 00:13:55.371 "write": true, 00:13:55.371 "unmap": true, 00:13:55.371 "flush": true, 00:13:55.371 "reset": true, 00:13:55.371 "nvme_admin": false, 00:13:55.371 "nvme_io": false, 00:13:55.371 "nvme_io_md": false, 00:13:55.371 "write_zeroes": true, 00:13:55.371 "zcopy": true, 00:13:55.371 "get_zone_info": false, 00:13:55.371 "zone_management": false, 00:13:55.371 "zone_append": false, 00:13:55.371 "compare": false, 00:13:55.371 
"compare_and_write": false, 00:13:55.371 "abort": true, 00:13:55.371 "seek_hole": false, 00:13:55.371 "seek_data": false, 00:13:55.371 "copy": true, 00:13:55.371 "nvme_iov_md": false 00:13:55.371 }, 00:13:55.371 "memory_domains": [ 00:13:55.371 { 00:13:55.371 "dma_device_id": "system", 00:13:55.371 "dma_device_type": 1 00:13:55.371 }, 00:13:55.371 { 00:13:55.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.371 "dma_device_type": 2 00:13:55.371 } 00:13:55.371 ], 00:13:55.371 "driver_specific": {} 00:13:55.371 } 00:13:55.371 ] 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.371 [2024-11-15 10:42:25.892895] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.371 [2024-11-15 10:42:25.893091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.371 [2024-11-15 10:42:25.893227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.371 [2024-11-15 10:42:25.895592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.371 [2024-11-15 10:42:25.895662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.371 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.630 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.630 "name": "Existed_Raid", 00:13:55.630 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:55.630 "strip_size_kb": 0, 00:13:55.630 "state": "configuring", 00:13:55.630 "raid_level": "raid1", 00:13:55.630 "superblock": false, 00:13:55.630 "num_base_bdevs": 4, 00:13:55.630 "num_base_bdevs_discovered": 3, 00:13:55.630 "num_base_bdevs_operational": 4, 00:13:55.630 "base_bdevs_list": [ 00:13:55.630 { 00:13:55.630 "name": "BaseBdev1", 00:13:55.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.630 "is_configured": false, 00:13:55.630 "data_offset": 0, 00:13:55.630 "data_size": 0 00:13:55.630 }, 00:13:55.630 { 00:13:55.630 "name": "BaseBdev2", 00:13:55.630 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:13:55.630 "is_configured": true, 00:13:55.630 "data_offset": 0, 00:13:55.630 "data_size": 65536 00:13:55.630 }, 00:13:55.630 { 00:13:55.630 "name": "BaseBdev3", 00:13:55.630 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:13:55.630 "is_configured": true, 00:13:55.630 "data_offset": 0, 00:13:55.630 "data_size": 65536 00:13:55.630 }, 00:13:55.630 { 00:13:55.630 "name": "BaseBdev4", 00:13:55.630 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:13:55.630 "is_configured": true, 00:13:55.630 "data_offset": 0, 00:13:55.630 "data_size": 65536 00:13:55.630 } 00:13:55.630 ] 00:13:55.630 }' 00:13:55.630 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.630 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.889 [2024-11-15 10:42:26.385051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.889 "name": "Existed_Raid", 00:13:55.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.889 
"strip_size_kb": 0, 00:13:55.889 "state": "configuring", 00:13:55.889 "raid_level": "raid1", 00:13:55.889 "superblock": false, 00:13:55.889 "num_base_bdevs": 4, 00:13:55.889 "num_base_bdevs_discovered": 2, 00:13:55.889 "num_base_bdevs_operational": 4, 00:13:55.889 "base_bdevs_list": [ 00:13:55.889 { 00:13:55.889 "name": "BaseBdev1", 00:13:55.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.889 "is_configured": false, 00:13:55.889 "data_offset": 0, 00:13:55.889 "data_size": 0 00:13:55.889 }, 00:13:55.889 { 00:13:55.889 "name": null, 00:13:55.889 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:13:55.889 "is_configured": false, 00:13:55.889 "data_offset": 0, 00:13:55.889 "data_size": 65536 00:13:55.889 }, 00:13:55.889 { 00:13:55.889 "name": "BaseBdev3", 00:13:55.889 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:13:55.889 "is_configured": true, 00:13:55.889 "data_offset": 0, 00:13:55.889 "data_size": 65536 00:13:55.889 }, 00:13:55.889 { 00:13:55.889 "name": "BaseBdev4", 00:13:55.889 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:13:55.889 "is_configured": true, 00:13:55.889 "data_offset": 0, 00:13:55.889 "data_size": 65536 00:13:55.889 } 00:13:55.889 ] 00:13:55.889 }' 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.889 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.456 10:42:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.456 [2024-11-15 10:42:26.964613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.456 BaseBdev1 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.456 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.456 [ 00:13:56.456 { 00:13:56.456 "name": "BaseBdev1", 00:13:56.456 "aliases": [ 00:13:56.456 "6571d904-4155-4611-85de-4d856f597c3a" 00:13:56.456 ], 00:13:56.456 "product_name": "Malloc disk", 00:13:56.456 "block_size": 512, 00:13:56.456 "num_blocks": 65536, 00:13:56.456 "uuid": "6571d904-4155-4611-85de-4d856f597c3a", 00:13:56.456 "assigned_rate_limits": { 00:13:56.456 "rw_ios_per_sec": 0, 00:13:56.456 "rw_mbytes_per_sec": 0, 00:13:56.456 "r_mbytes_per_sec": 0, 00:13:56.456 "w_mbytes_per_sec": 0 00:13:56.456 }, 00:13:56.456 "claimed": true, 00:13:56.456 "claim_type": "exclusive_write", 00:13:56.456 "zoned": false, 00:13:56.456 "supported_io_types": { 00:13:56.456 "read": true, 00:13:56.456 "write": true, 00:13:56.456 "unmap": true, 00:13:56.456 "flush": true, 00:13:56.456 "reset": true, 00:13:56.456 "nvme_admin": false, 00:13:56.457 "nvme_io": false, 00:13:56.457 "nvme_io_md": false, 00:13:56.457 "write_zeroes": true, 00:13:56.457 "zcopy": true, 00:13:56.457 "get_zone_info": false, 00:13:56.457 "zone_management": false, 00:13:56.457 "zone_append": false, 00:13:56.457 "compare": false, 00:13:56.457 "compare_and_write": false, 00:13:56.457 "abort": true, 00:13:56.457 "seek_hole": false, 00:13:56.457 "seek_data": false, 00:13:56.457 "copy": true, 00:13:56.457 "nvme_iov_md": false 00:13:56.457 }, 00:13:56.457 "memory_domains": [ 00:13:56.457 { 00:13:56.457 "dma_device_id": "system", 00:13:56.457 "dma_device_type": 1 00:13:56.457 }, 00:13:56.457 { 00:13:56.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.457 "dma_device_type": 2 00:13:56.457 } 00:13:56.457 ], 00:13:56.457 "driver_specific": {} 00:13:56.457 } 00:13:56.457 ] 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.457 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.457 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.457 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.457 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.457 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.715 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.715 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.715 "name": "Existed_Raid", 00:13:56.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.715 
"strip_size_kb": 0, 00:13:56.715 "state": "configuring", 00:13:56.715 "raid_level": "raid1", 00:13:56.715 "superblock": false, 00:13:56.715 "num_base_bdevs": 4, 00:13:56.715 "num_base_bdevs_discovered": 3, 00:13:56.715 "num_base_bdevs_operational": 4, 00:13:56.715 "base_bdevs_list": [ 00:13:56.715 { 00:13:56.715 "name": "BaseBdev1", 00:13:56.715 "uuid": "6571d904-4155-4611-85de-4d856f597c3a", 00:13:56.715 "is_configured": true, 00:13:56.715 "data_offset": 0, 00:13:56.715 "data_size": 65536 00:13:56.715 }, 00:13:56.715 { 00:13:56.715 "name": null, 00:13:56.715 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:13:56.715 "is_configured": false, 00:13:56.715 "data_offset": 0, 00:13:56.715 "data_size": 65536 00:13:56.715 }, 00:13:56.715 { 00:13:56.715 "name": "BaseBdev3", 00:13:56.715 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:13:56.715 "is_configured": true, 00:13:56.715 "data_offset": 0, 00:13:56.715 "data_size": 65536 00:13:56.715 }, 00:13:56.715 { 00:13:56.715 "name": "BaseBdev4", 00:13:56.715 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:13:56.715 "is_configured": true, 00:13:56.715 "data_offset": 0, 00:13:56.715 "data_size": 65536 00:13:56.715 } 00:13:56.715 ] 00:13:56.715 }' 00:13:56.715 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.715 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.973 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:56.973 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.973 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.973 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.973 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.973 
10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:56.973 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:56.973 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.973 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.973 [2024-11-15 10:42:27.528901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.232 10:42:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.232 "name": "Existed_Raid", 00:13:57.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.232 "strip_size_kb": 0, 00:13:57.232 "state": "configuring", 00:13:57.232 "raid_level": "raid1", 00:13:57.232 "superblock": false, 00:13:57.232 "num_base_bdevs": 4, 00:13:57.232 "num_base_bdevs_discovered": 2, 00:13:57.232 "num_base_bdevs_operational": 4, 00:13:57.232 "base_bdevs_list": [ 00:13:57.232 { 00:13:57.232 "name": "BaseBdev1", 00:13:57.232 "uuid": "6571d904-4155-4611-85de-4d856f597c3a", 00:13:57.232 "is_configured": true, 00:13:57.232 "data_offset": 0, 00:13:57.232 "data_size": 65536 00:13:57.232 }, 00:13:57.232 { 00:13:57.232 "name": null, 00:13:57.232 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:13:57.232 "is_configured": false, 00:13:57.232 "data_offset": 0, 00:13:57.232 "data_size": 65536 00:13:57.232 }, 00:13:57.232 { 00:13:57.232 "name": null, 00:13:57.232 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:13:57.232 "is_configured": false, 00:13:57.232 "data_offset": 0, 00:13:57.232 "data_size": 65536 00:13:57.232 }, 00:13:57.232 { 00:13:57.232 "name": "BaseBdev4", 00:13:57.232 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:13:57.232 "is_configured": true, 00:13:57.232 "data_offset": 0, 00:13:57.232 "data_size": 65536 00:13:57.232 } 00:13:57.232 ] 00:13:57.232 }' 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.232 10:42:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:57.490 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.490 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.490 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.490 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.490 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.749 [2024-11-15 10:42:28.061026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.749 "name": "Existed_Raid", 00:13:57.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.749 "strip_size_kb": 0, 00:13:57.749 "state": "configuring", 00:13:57.749 "raid_level": "raid1", 00:13:57.749 "superblock": false, 00:13:57.749 "num_base_bdevs": 4, 00:13:57.749 "num_base_bdevs_discovered": 3, 00:13:57.749 "num_base_bdevs_operational": 4, 00:13:57.749 "base_bdevs_list": [ 00:13:57.749 { 00:13:57.749 "name": "BaseBdev1", 00:13:57.749 "uuid": "6571d904-4155-4611-85de-4d856f597c3a", 00:13:57.749 "is_configured": true, 00:13:57.749 "data_offset": 0, 00:13:57.749 "data_size": 65536 00:13:57.749 }, 00:13:57.749 { 00:13:57.749 "name": null, 00:13:57.749 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:13:57.749 "is_configured": false, 00:13:57.749 "data_offset": 0, 00:13:57.749 "data_size": 65536 00:13:57.749 }, 00:13:57.749 { 
00:13:57.749 "name": "BaseBdev3", 00:13:57.749 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:13:57.749 "is_configured": true, 00:13:57.749 "data_offset": 0, 00:13:57.749 "data_size": 65536 00:13:57.749 }, 00:13:57.749 { 00:13:57.749 "name": "BaseBdev4", 00:13:57.749 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:13:57.749 "is_configured": true, 00:13:57.749 "data_offset": 0, 00:13:57.749 "data_size": 65536 00:13:57.749 } 00:13:57.749 ] 00:13:57.749 }' 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.749 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.316 [2024-11-15 10:42:28.621241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.316 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.316 "name": "Existed_Raid", 00:13:58.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.316 "strip_size_kb": 0, 00:13:58.316 "state": "configuring", 00:13:58.316 "raid_level": "raid1", 00:13:58.316 "superblock": false, 00:13:58.316 
"num_base_bdevs": 4, 00:13:58.316 "num_base_bdevs_discovered": 2, 00:13:58.316 "num_base_bdevs_operational": 4, 00:13:58.316 "base_bdevs_list": [ 00:13:58.316 { 00:13:58.316 "name": null, 00:13:58.316 "uuid": "6571d904-4155-4611-85de-4d856f597c3a", 00:13:58.316 "is_configured": false, 00:13:58.316 "data_offset": 0, 00:13:58.317 "data_size": 65536 00:13:58.317 }, 00:13:58.317 { 00:13:58.317 "name": null, 00:13:58.317 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:13:58.317 "is_configured": false, 00:13:58.317 "data_offset": 0, 00:13:58.317 "data_size": 65536 00:13:58.317 }, 00:13:58.317 { 00:13:58.317 "name": "BaseBdev3", 00:13:58.317 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:13:58.317 "is_configured": true, 00:13:58.317 "data_offset": 0, 00:13:58.317 "data_size": 65536 00:13:58.317 }, 00:13:58.317 { 00:13:58.317 "name": "BaseBdev4", 00:13:58.317 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:13:58.317 "is_configured": true, 00:13:58.317 "data_offset": 0, 00:13:58.317 "data_size": 65536 00:13:58.317 } 00:13:58.317 ] 00:13:58.317 }' 00:13:58.317 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.317 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:58.884 10:42:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.884 [2024-11-15 10:42:29.286958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.884 10:42:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.884 "name": "Existed_Raid", 00:13:58.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.884 "strip_size_kb": 0, 00:13:58.884 "state": "configuring", 00:13:58.884 "raid_level": "raid1", 00:13:58.884 "superblock": false, 00:13:58.884 "num_base_bdevs": 4, 00:13:58.884 "num_base_bdevs_discovered": 3, 00:13:58.884 "num_base_bdevs_operational": 4, 00:13:58.884 "base_bdevs_list": [ 00:13:58.884 { 00:13:58.884 "name": null, 00:13:58.884 "uuid": "6571d904-4155-4611-85de-4d856f597c3a", 00:13:58.884 "is_configured": false, 00:13:58.884 "data_offset": 0, 00:13:58.884 "data_size": 65536 00:13:58.884 }, 00:13:58.884 { 00:13:58.884 "name": "BaseBdev2", 00:13:58.884 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:13:58.884 "is_configured": true, 00:13:58.884 "data_offset": 0, 00:13:58.884 "data_size": 65536 00:13:58.884 }, 00:13:58.884 { 00:13:58.884 "name": "BaseBdev3", 00:13:58.884 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:13:58.884 "is_configured": true, 00:13:58.884 "data_offset": 0, 00:13:58.884 "data_size": 65536 00:13:58.884 }, 00:13:58.884 { 00:13:58.884 "name": "BaseBdev4", 00:13:58.884 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:13:58.884 "is_configured": true, 00:13:58.884 "data_offset": 0, 00:13:58.884 "data_size": 65536 00:13:58.884 } 00:13:58.884 ] 00:13:58.884 }' 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.884 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6571d904-4155-4611-85de-4d856f597c3a 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.451 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.451 [2024-11-15 10:42:29.960512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:59.451 [2024-11-15 10:42:29.960567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:59.452 [2024-11-15 10:42:29.960583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:59.452 [2024-11-15 10:42:29.960911] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:59.452 [2024-11-15 10:42:29.961098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:59.452 [2024-11-15 10:42:29.961115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:59.452 [2024-11-15 10:42:29.961453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.452 NewBaseBdev 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.452 10:42:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.452 [ 00:13:59.452 { 00:13:59.452 "name": "NewBaseBdev", 00:13:59.452 "aliases": [ 00:13:59.452 "6571d904-4155-4611-85de-4d856f597c3a" 00:13:59.452 ], 00:13:59.452 "product_name": "Malloc disk", 00:13:59.452 "block_size": 512, 00:13:59.452 "num_blocks": 65536, 00:13:59.452 "uuid": "6571d904-4155-4611-85de-4d856f597c3a", 00:13:59.452 "assigned_rate_limits": { 00:13:59.452 "rw_ios_per_sec": 0, 00:13:59.452 "rw_mbytes_per_sec": 0, 00:13:59.452 "r_mbytes_per_sec": 0, 00:13:59.452 "w_mbytes_per_sec": 0 00:13:59.452 }, 00:13:59.452 "claimed": true, 00:13:59.452 "claim_type": "exclusive_write", 00:13:59.452 "zoned": false, 00:13:59.452 "supported_io_types": { 00:13:59.452 "read": true, 00:13:59.452 "write": true, 00:13:59.452 "unmap": true, 00:13:59.452 "flush": true, 00:13:59.452 "reset": true, 00:13:59.452 "nvme_admin": false, 00:13:59.452 "nvme_io": false, 00:13:59.452 "nvme_io_md": false, 00:13:59.452 "write_zeroes": true, 00:13:59.452 "zcopy": true, 00:13:59.452 "get_zone_info": false, 00:13:59.452 "zone_management": false, 00:13:59.452 "zone_append": false, 00:13:59.452 "compare": false, 00:13:59.452 "compare_and_write": false, 00:13:59.452 "abort": true, 00:13:59.452 "seek_hole": false, 00:13:59.452 "seek_data": false, 00:13:59.452 "copy": true, 00:13:59.452 "nvme_iov_md": false 00:13:59.452 }, 00:13:59.452 "memory_domains": [ 00:13:59.452 { 00:13:59.452 "dma_device_id": "system", 00:13:59.452 "dma_device_type": 1 00:13:59.452 }, 00:13:59.452 { 00:13:59.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.452 "dma_device_type": 2 00:13:59.452 } 00:13:59.452 ], 00:13:59.452 "driver_specific": {} 00:13:59.452 } 00:13:59.452 ] 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:59.452 10:42:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.452 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.452 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.452 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.710 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.710 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.710 "name": "Existed_Raid", 00:13:59.710 "uuid": "ca7d5441-51a1-423c-8b4c-6ea74ca6a8c9", 00:13:59.710 "strip_size_kb": 0, 00:13:59.710 "state": "online", 00:13:59.710 "raid_level": "raid1", 
00:13:59.710 "superblock": false, 00:13:59.710 "num_base_bdevs": 4, 00:13:59.710 "num_base_bdevs_discovered": 4, 00:13:59.710 "num_base_bdevs_operational": 4, 00:13:59.710 "base_bdevs_list": [ 00:13:59.710 { 00:13:59.710 "name": "NewBaseBdev", 00:13:59.710 "uuid": "6571d904-4155-4611-85de-4d856f597c3a", 00:13:59.710 "is_configured": true, 00:13:59.710 "data_offset": 0, 00:13:59.710 "data_size": 65536 00:13:59.710 }, 00:13:59.710 { 00:13:59.710 "name": "BaseBdev2", 00:13:59.710 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:13:59.710 "is_configured": true, 00:13:59.710 "data_offset": 0, 00:13:59.710 "data_size": 65536 00:13:59.710 }, 00:13:59.710 { 00:13:59.710 "name": "BaseBdev3", 00:13:59.710 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:13:59.710 "is_configured": true, 00:13:59.710 "data_offset": 0, 00:13:59.710 "data_size": 65536 00:13:59.710 }, 00:13:59.710 { 00:13:59.710 "name": "BaseBdev4", 00:13:59.710 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:13:59.710 "is_configured": true, 00:13:59.710 "data_offset": 0, 00:13:59.710 "data_size": 65536 00:13:59.710 } 00:13:59.710 ] 00:13:59.710 }' 00:13:59.710 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.710 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.969 [2024-11-15 10:42:30.505100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.969 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.227 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:00.227 "name": "Existed_Raid", 00:14:00.227 "aliases": [ 00:14:00.227 "ca7d5441-51a1-423c-8b4c-6ea74ca6a8c9" 00:14:00.227 ], 00:14:00.227 "product_name": "Raid Volume", 00:14:00.227 "block_size": 512, 00:14:00.227 "num_blocks": 65536, 00:14:00.227 "uuid": "ca7d5441-51a1-423c-8b4c-6ea74ca6a8c9", 00:14:00.227 "assigned_rate_limits": { 00:14:00.227 "rw_ios_per_sec": 0, 00:14:00.227 "rw_mbytes_per_sec": 0, 00:14:00.227 "r_mbytes_per_sec": 0, 00:14:00.227 "w_mbytes_per_sec": 0 00:14:00.227 }, 00:14:00.227 "claimed": false, 00:14:00.227 "zoned": false, 00:14:00.227 "supported_io_types": { 00:14:00.227 "read": true, 00:14:00.227 "write": true, 00:14:00.227 "unmap": false, 00:14:00.227 "flush": false, 00:14:00.227 "reset": true, 00:14:00.227 "nvme_admin": false, 00:14:00.227 "nvme_io": false, 00:14:00.227 "nvme_io_md": false, 00:14:00.227 "write_zeroes": true, 00:14:00.227 "zcopy": false, 00:14:00.227 "get_zone_info": false, 00:14:00.227 "zone_management": false, 00:14:00.227 "zone_append": false, 00:14:00.227 "compare": false, 00:14:00.227 "compare_and_write": false, 00:14:00.227 "abort": false, 00:14:00.227 "seek_hole": false, 00:14:00.227 "seek_data": false, 00:14:00.227 "copy": false, 00:14:00.227 
"nvme_iov_md": false 00:14:00.227 }, 00:14:00.227 "memory_domains": [ 00:14:00.228 { 00:14:00.228 "dma_device_id": "system", 00:14:00.228 "dma_device_type": 1 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.228 "dma_device_type": 2 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "dma_device_id": "system", 00:14:00.228 "dma_device_type": 1 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.228 "dma_device_type": 2 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "dma_device_id": "system", 00:14:00.228 "dma_device_type": 1 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.228 "dma_device_type": 2 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "dma_device_id": "system", 00:14:00.228 "dma_device_type": 1 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.228 "dma_device_type": 2 00:14:00.228 } 00:14:00.228 ], 00:14:00.228 "driver_specific": { 00:14:00.228 "raid": { 00:14:00.228 "uuid": "ca7d5441-51a1-423c-8b4c-6ea74ca6a8c9", 00:14:00.228 "strip_size_kb": 0, 00:14:00.228 "state": "online", 00:14:00.228 "raid_level": "raid1", 00:14:00.228 "superblock": false, 00:14:00.228 "num_base_bdevs": 4, 00:14:00.228 "num_base_bdevs_discovered": 4, 00:14:00.228 "num_base_bdevs_operational": 4, 00:14:00.228 "base_bdevs_list": [ 00:14:00.228 { 00:14:00.228 "name": "NewBaseBdev", 00:14:00.228 "uuid": "6571d904-4155-4611-85de-4d856f597c3a", 00:14:00.228 "is_configured": true, 00:14:00.228 "data_offset": 0, 00:14:00.228 "data_size": 65536 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "name": "BaseBdev2", 00:14:00.228 "uuid": "34e208c5-5ae3-4fff-b3a7-4c59782ebd98", 00:14:00.228 "is_configured": true, 00:14:00.228 "data_offset": 0, 00:14:00.228 "data_size": 65536 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "name": "BaseBdev3", 00:14:00.228 "uuid": "17ae44f4-7a45-4414-aa7e-27f7e49191a7", 00:14:00.228 "is_configured": true, 
00:14:00.228 "data_offset": 0, 00:14:00.228 "data_size": 65536 00:14:00.228 }, 00:14:00.228 { 00:14:00.228 "name": "BaseBdev4", 00:14:00.228 "uuid": "9040303f-c240-4184-b796-7fe778d6bf4d", 00:14:00.228 "is_configured": true, 00:14:00.228 "data_offset": 0, 00:14:00.228 "data_size": 65536 00:14:00.228 } 00:14:00.228 ] 00:14:00.228 } 00:14:00.228 } 00:14:00.228 }' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:00.228 BaseBdev2 00:14:00.228 BaseBdev3 00:14:00.228 BaseBdev4' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.228 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.487 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.487 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.488 [2024-11-15 10:42:30.872736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.488 [2024-11-15 10:42:30.872772] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.488 [2024-11-15 10:42:30.872872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.488 [2024-11-15 10:42:30.873236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.488 [2024-11-15 10:42:30.873259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73490 
00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73490 ']' 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73490 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73490 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73490' 00:14:00.488 killing process with pid 73490 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73490 00:14:00.488 [2024-11-15 10:42:30.912833] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.488 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73490 00:14:00.746 [2024-11-15 10:42:31.245972] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.746 10:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:01.746 ************************************ 00:14:01.746 END TEST raid_state_function_test 00:14:01.746 ************************************ 00:14:01.746 00:14:01.746 real 0m12.488s 00:14:01.746 user 0m20.929s 00:14:01.746 sys 0m1.544s 00:14:01.746 10:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:01.746 10:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.746 10:42:32 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:01.746 10:42:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:01.746 10:42:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:01.746 10:42:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:02.005 ************************************ 00:14:02.005 START TEST raid_state_function_test_sb 00:14:02.005 ************************************ 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.005 10:42:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:02.005 Process raid pid: 74167 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74167 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74167' 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74167 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74167 ']' 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:02.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:02.005 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.005 [2024-11-15 10:42:32.420211] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:14:02.005 [2024-11-15 10:42:32.420386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.263 [2024-11-15 10:42:32.594539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.263 [2024-11-15 10:42:32.738577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.522 [2024-11-15 10:42:32.961370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.522 [2024-11-15 10:42:32.961664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.090 [2024-11-15 10:42:33.466767] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.090 [2024-11-15 10:42:33.466838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.090 [2024-11-15 10:42:33.466856] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.090 [2024-11-15 10:42:33.466872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.090 [2024-11-15 10:42:33.466882] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:03.090 [2024-11-15 10:42:33.466896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.090 [2024-11-15 10:42:33.466905] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:03.090 [2024-11-15 10:42:33.466918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.090 10:42:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.090 "name": "Existed_Raid", 00:14:03.090 "uuid": "c5277b9c-5b19-4469-b94c-3efd35c3ab5c", 00:14:03.090 "strip_size_kb": 0, 00:14:03.090 "state": "configuring", 00:14:03.090 "raid_level": "raid1", 00:14:03.090 "superblock": true, 00:14:03.090 "num_base_bdevs": 4, 00:14:03.090 "num_base_bdevs_discovered": 0, 00:14:03.090 "num_base_bdevs_operational": 4, 00:14:03.090 "base_bdevs_list": [ 00:14:03.090 { 00:14:03.090 "name": "BaseBdev1", 00:14:03.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.090 "is_configured": false, 00:14:03.090 "data_offset": 0, 00:14:03.090 "data_size": 0 00:14:03.090 }, 00:14:03.090 { 00:14:03.090 "name": "BaseBdev2", 00:14:03.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.090 "is_configured": false, 00:14:03.090 "data_offset": 0, 00:14:03.090 "data_size": 0 00:14:03.090 }, 00:14:03.090 { 00:14:03.090 "name": "BaseBdev3", 00:14:03.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.090 "is_configured": false, 00:14:03.090 "data_offset": 0, 00:14:03.090 "data_size": 0 00:14:03.090 }, 00:14:03.090 { 00:14:03.090 "name": "BaseBdev4", 00:14:03.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.090 "is_configured": false, 00:14:03.090 "data_offset": 0, 00:14:03.090 "data_size": 0 00:14:03.090 } 00:14:03.090 ] 00:14:03.090 }' 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.090 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.657 10:42:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:03.657 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.657 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.657 [2024-11-15 10:42:33.958827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.657 [2024-11-15 10:42:33.958874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:03.657 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.657 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:03.657 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.657 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.657 [2024-11-15 10:42:33.966810] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.657 [2024-11-15 10:42:33.966862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.657 [2024-11-15 10:42:33.966876] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.657 [2024-11-15 10:42:33.966891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.657 [2024-11-15 10:42:33.966901] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.658 [2024-11-15 10:42:33.966914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.658 [2024-11-15 10:42:33.966924] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:03.658 [2024-11-15 10:42:33.966937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:03.658 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.658 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:03.658 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.658 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.658 [2024-11-15 10:42:34.007257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.658 BaseBdev1 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.658 [ 00:14:03.658 { 00:14:03.658 "name": "BaseBdev1", 00:14:03.658 "aliases": [ 00:14:03.658 "b696a74d-5a7a-409c-987f-ce2ea1492b8d" 00:14:03.658 ], 00:14:03.658 "product_name": "Malloc disk", 00:14:03.658 "block_size": 512, 00:14:03.658 "num_blocks": 65536, 00:14:03.658 "uuid": "b696a74d-5a7a-409c-987f-ce2ea1492b8d", 00:14:03.658 "assigned_rate_limits": { 00:14:03.658 "rw_ios_per_sec": 0, 00:14:03.658 "rw_mbytes_per_sec": 0, 00:14:03.658 "r_mbytes_per_sec": 0, 00:14:03.658 "w_mbytes_per_sec": 0 00:14:03.658 }, 00:14:03.658 "claimed": true, 00:14:03.658 "claim_type": "exclusive_write", 00:14:03.658 "zoned": false, 00:14:03.658 "supported_io_types": { 00:14:03.658 "read": true, 00:14:03.658 "write": true, 00:14:03.658 "unmap": true, 00:14:03.658 "flush": true, 00:14:03.658 "reset": true, 00:14:03.658 "nvme_admin": false, 00:14:03.658 "nvme_io": false, 00:14:03.658 "nvme_io_md": false, 00:14:03.658 "write_zeroes": true, 00:14:03.658 "zcopy": true, 00:14:03.658 "get_zone_info": false, 00:14:03.658 "zone_management": false, 00:14:03.658 "zone_append": false, 00:14:03.658 "compare": false, 00:14:03.658 "compare_and_write": false, 00:14:03.658 "abort": true, 00:14:03.658 "seek_hole": false, 00:14:03.658 "seek_data": false, 00:14:03.658 "copy": true, 00:14:03.658 "nvme_iov_md": false 00:14:03.658 }, 00:14:03.658 "memory_domains": [ 00:14:03.658 { 00:14:03.658 "dma_device_id": "system", 00:14:03.658 "dma_device_type": 1 00:14:03.658 }, 00:14:03.658 { 00:14:03.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.658 "dma_device_type": 2 00:14:03.658 } 00:14:03.658 ], 00:14:03.658 "driver_specific": {} 
00:14:03.658 } 00:14:03.658 ] 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.658 "name": "Existed_Raid", 00:14:03.658 "uuid": "3e3a431b-4a2a-4d10-8ac0-0166fe97a191", 00:14:03.658 "strip_size_kb": 0, 00:14:03.658 "state": "configuring", 00:14:03.658 "raid_level": "raid1", 00:14:03.658 "superblock": true, 00:14:03.658 "num_base_bdevs": 4, 00:14:03.658 "num_base_bdevs_discovered": 1, 00:14:03.658 "num_base_bdevs_operational": 4, 00:14:03.658 "base_bdevs_list": [ 00:14:03.658 { 00:14:03.658 "name": "BaseBdev1", 00:14:03.658 "uuid": "b696a74d-5a7a-409c-987f-ce2ea1492b8d", 00:14:03.658 "is_configured": true, 00:14:03.658 "data_offset": 2048, 00:14:03.658 "data_size": 63488 00:14:03.658 }, 00:14:03.658 { 00:14:03.658 "name": "BaseBdev2", 00:14:03.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.658 "is_configured": false, 00:14:03.658 "data_offset": 0, 00:14:03.658 "data_size": 0 00:14:03.658 }, 00:14:03.658 { 00:14:03.658 "name": "BaseBdev3", 00:14:03.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.658 "is_configured": false, 00:14:03.658 "data_offset": 0, 00:14:03.658 "data_size": 0 00:14:03.658 }, 00:14:03.658 { 00:14:03.658 "name": "BaseBdev4", 00:14:03.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.658 "is_configured": false, 00:14:03.658 "data_offset": 0, 00:14:03.658 "data_size": 0 00:14:03.658 } 00:14:03.658 ] 00:14:03.658 }' 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.658 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.226 [2024-11-15 10:42:34.531473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.226 [2024-11-15 10:42:34.531536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.226 [2024-11-15 10:42:34.539511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.226 [2024-11-15 10:42:34.541885] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.226 [2024-11-15 10:42:34.541940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.226 [2024-11-15 10:42:34.541958] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:04.226 [2024-11-15 10:42:34.541976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.226 [2024-11-15 10:42:34.541986] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:04.226 [2024-11-15 10:42:34.541999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:04.226 10:42:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.226 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.226 "name": 
"Existed_Raid", 00:14:04.226 "uuid": "c397c301-2304-4491-9b39-3f5c18255f6c", 00:14:04.226 "strip_size_kb": 0, 00:14:04.226 "state": "configuring", 00:14:04.226 "raid_level": "raid1", 00:14:04.226 "superblock": true, 00:14:04.226 "num_base_bdevs": 4, 00:14:04.226 "num_base_bdevs_discovered": 1, 00:14:04.226 "num_base_bdevs_operational": 4, 00:14:04.226 "base_bdevs_list": [ 00:14:04.226 { 00:14:04.226 "name": "BaseBdev1", 00:14:04.227 "uuid": "b696a74d-5a7a-409c-987f-ce2ea1492b8d", 00:14:04.227 "is_configured": true, 00:14:04.227 "data_offset": 2048, 00:14:04.227 "data_size": 63488 00:14:04.227 }, 00:14:04.227 { 00:14:04.227 "name": "BaseBdev2", 00:14:04.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.227 "is_configured": false, 00:14:04.227 "data_offset": 0, 00:14:04.227 "data_size": 0 00:14:04.227 }, 00:14:04.227 { 00:14:04.227 "name": "BaseBdev3", 00:14:04.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.227 "is_configured": false, 00:14:04.227 "data_offset": 0, 00:14:04.227 "data_size": 0 00:14:04.227 }, 00:14:04.227 { 00:14:04.227 "name": "BaseBdev4", 00:14:04.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.227 "is_configured": false, 00:14:04.227 "data_offset": 0, 00:14:04.227 "data_size": 0 00:14:04.227 } 00:14:04.227 ] 00:14:04.227 }' 00:14:04.227 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.227 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.794 [2024-11-15 10:42:35.081720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.794 
BaseBdev2 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.794 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.794 [ 00:14:04.794 { 00:14:04.794 "name": "BaseBdev2", 00:14:04.794 "aliases": [ 00:14:04.794 "642beb4a-acba-4baa-8525-dab63bb211a0" 00:14:04.794 ], 00:14:04.794 "product_name": "Malloc disk", 00:14:04.794 "block_size": 512, 00:14:04.794 "num_blocks": 65536, 00:14:04.794 "uuid": "642beb4a-acba-4baa-8525-dab63bb211a0", 00:14:04.794 "assigned_rate_limits": { 
00:14:04.794 "rw_ios_per_sec": 0, 00:14:04.794 "rw_mbytes_per_sec": 0, 00:14:04.794 "r_mbytes_per_sec": 0, 00:14:04.794 "w_mbytes_per_sec": 0 00:14:04.794 }, 00:14:04.794 "claimed": true, 00:14:04.794 "claim_type": "exclusive_write", 00:14:04.794 "zoned": false, 00:14:04.794 "supported_io_types": { 00:14:04.794 "read": true, 00:14:04.794 "write": true, 00:14:04.794 "unmap": true, 00:14:04.794 "flush": true, 00:14:04.794 "reset": true, 00:14:04.794 "nvme_admin": false, 00:14:04.794 "nvme_io": false, 00:14:04.794 "nvme_io_md": false, 00:14:04.794 "write_zeroes": true, 00:14:04.794 "zcopy": true, 00:14:04.795 "get_zone_info": false, 00:14:04.795 "zone_management": false, 00:14:04.795 "zone_append": false, 00:14:04.795 "compare": false, 00:14:04.795 "compare_and_write": false, 00:14:04.795 "abort": true, 00:14:04.795 "seek_hole": false, 00:14:04.795 "seek_data": false, 00:14:04.795 "copy": true, 00:14:04.795 "nvme_iov_md": false 00:14:04.795 }, 00:14:04.795 "memory_domains": [ 00:14:04.795 { 00:14:04.795 "dma_device_id": "system", 00:14:04.795 "dma_device_type": 1 00:14:04.795 }, 00:14:04.795 { 00:14:04.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.795 "dma_device_type": 2 00:14:04.795 } 00:14:04.795 ], 00:14:04.795 "driver_specific": {} 00:14:04.795 } 00:14:04.795 ] 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.795 "name": "Existed_Raid", 00:14:04.795 "uuid": "c397c301-2304-4491-9b39-3f5c18255f6c", 00:14:04.795 "strip_size_kb": 0, 00:14:04.795 "state": "configuring", 00:14:04.795 "raid_level": "raid1", 00:14:04.795 "superblock": true, 00:14:04.795 "num_base_bdevs": 4, 00:14:04.795 "num_base_bdevs_discovered": 2, 00:14:04.795 "num_base_bdevs_operational": 4, 00:14:04.795 
"base_bdevs_list": [ 00:14:04.795 { 00:14:04.795 "name": "BaseBdev1", 00:14:04.795 "uuid": "b696a74d-5a7a-409c-987f-ce2ea1492b8d", 00:14:04.795 "is_configured": true, 00:14:04.795 "data_offset": 2048, 00:14:04.795 "data_size": 63488 00:14:04.795 }, 00:14:04.795 { 00:14:04.795 "name": "BaseBdev2", 00:14:04.795 "uuid": "642beb4a-acba-4baa-8525-dab63bb211a0", 00:14:04.795 "is_configured": true, 00:14:04.795 "data_offset": 2048, 00:14:04.795 "data_size": 63488 00:14:04.795 }, 00:14:04.795 { 00:14:04.795 "name": "BaseBdev3", 00:14:04.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.795 "is_configured": false, 00:14:04.795 "data_offset": 0, 00:14:04.795 "data_size": 0 00:14:04.795 }, 00:14:04.795 { 00:14:04.795 "name": "BaseBdev4", 00:14:04.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.795 "is_configured": false, 00:14:04.795 "data_offset": 0, 00:14:04.795 "data_size": 0 00:14:04.795 } 00:14:04.795 ] 00:14:04.795 }' 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.795 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.362 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:05.362 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.362 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.362 [2024-11-15 10:42:35.705242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.363 BaseBdev3 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.363 [ 00:14:05.363 { 00:14:05.363 "name": "BaseBdev3", 00:14:05.363 "aliases": [ 00:14:05.363 "64e33c80-6468-478a-b7a2-f591d27a7dc2" 00:14:05.363 ], 00:14:05.363 "product_name": "Malloc disk", 00:14:05.363 "block_size": 512, 00:14:05.363 "num_blocks": 65536, 00:14:05.363 "uuid": "64e33c80-6468-478a-b7a2-f591d27a7dc2", 00:14:05.363 "assigned_rate_limits": { 00:14:05.363 "rw_ios_per_sec": 0, 00:14:05.363 "rw_mbytes_per_sec": 0, 00:14:05.363 "r_mbytes_per_sec": 0, 00:14:05.363 "w_mbytes_per_sec": 0 00:14:05.363 }, 00:14:05.363 "claimed": true, 00:14:05.363 "claim_type": "exclusive_write", 00:14:05.363 "zoned": false, 00:14:05.363 "supported_io_types": { 00:14:05.363 "read": true, 00:14:05.363 
"write": true, 00:14:05.363 "unmap": true, 00:14:05.363 "flush": true, 00:14:05.363 "reset": true, 00:14:05.363 "nvme_admin": false, 00:14:05.363 "nvme_io": false, 00:14:05.363 "nvme_io_md": false, 00:14:05.363 "write_zeroes": true, 00:14:05.363 "zcopy": true, 00:14:05.363 "get_zone_info": false, 00:14:05.363 "zone_management": false, 00:14:05.363 "zone_append": false, 00:14:05.363 "compare": false, 00:14:05.363 "compare_and_write": false, 00:14:05.363 "abort": true, 00:14:05.363 "seek_hole": false, 00:14:05.363 "seek_data": false, 00:14:05.363 "copy": true, 00:14:05.363 "nvme_iov_md": false 00:14:05.363 }, 00:14:05.363 "memory_domains": [ 00:14:05.363 { 00:14:05.363 "dma_device_id": "system", 00:14:05.363 "dma_device_type": 1 00:14:05.363 }, 00:14:05.363 { 00:14:05.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.363 "dma_device_type": 2 00:14:05.363 } 00:14:05.363 ], 00:14:05.363 "driver_specific": {} 00:14:05.363 } 00:14:05.363 ] 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.363 "name": "Existed_Raid", 00:14:05.363 "uuid": "c397c301-2304-4491-9b39-3f5c18255f6c", 00:14:05.363 "strip_size_kb": 0, 00:14:05.363 "state": "configuring", 00:14:05.363 "raid_level": "raid1", 00:14:05.363 "superblock": true, 00:14:05.363 "num_base_bdevs": 4, 00:14:05.363 "num_base_bdevs_discovered": 3, 00:14:05.363 "num_base_bdevs_operational": 4, 00:14:05.363 "base_bdevs_list": [ 00:14:05.363 { 00:14:05.363 "name": "BaseBdev1", 00:14:05.363 "uuid": "b696a74d-5a7a-409c-987f-ce2ea1492b8d", 00:14:05.363 "is_configured": true, 00:14:05.363 "data_offset": 2048, 00:14:05.363 "data_size": 63488 00:14:05.363 }, 00:14:05.363 { 00:14:05.363 "name": "BaseBdev2", 00:14:05.363 "uuid": 
"642beb4a-acba-4baa-8525-dab63bb211a0", 00:14:05.363 "is_configured": true, 00:14:05.363 "data_offset": 2048, 00:14:05.363 "data_size": 63488 00:14:05.363 }, 00:14:05.363 { 00:14:05.363 "name": "BaseBdev3", 00:14:05.363 "uuid": "64e33c80-6468-478a-b7a2-f591d27a7dc2", 00:14:05.363 "is_configured": true, 00:14:05.363 "data_offset": 2048, 00:14:05.363 "data_size": 63488 00:14:05.363 }, 00:14:05.363 { 00:14:05.363 "name": "BaseBdev4", 00:14:05.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.363 "is_configured": false, 00:14:05.363 "data_offset": 0, 00:14:05.363 "data_size": 0 00:14:05.363 } 00:14:05.363 ] 00:14:05.363 }' 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.363 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.930 [2024-11-15 10:42:36.293051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:05.930 [2024-11-15 10:42:36.293416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:05.930 [2024-11-15 10:42:36.293438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.930 BaseBdev4 00:14:05.930 [2024-11-15 10:42:36.293777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:05.930 [2024-11-15 10:42:36.293992] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:05.930 [2024-11-15 10:42:36.294013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:05.930 [2024-11-15 10:42:36.294188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.930 [ 00:14:05.930 { 00:14:05.930 "name": "BaseBdev4", 00:14:05.930 "aliases": [ 00:14:05.930 "12c98ad6-405b-4ada-bd7d-7215809d6a28" 00:14:05.930 ], 00:14:05.930 "product_name": "Malloc disk", 00:14:05.930 "block_size": 512, 00:14:05.930 
"num_blocks": 65536, 00:14:05.930 "uuid": "12c98ad6-405b-4ada-bd7d-7215809d6a28", 00:14:05.930 "assigned_rate_limits": { 00:14:05.930 "rw_ios_per_sec": 0, 00:14:05.930 "rw_mbytes_per_sec": 0, 00:14:05.930 "r_mbytes_per_sec": 0, 00:14:05.930 "w_mbytes_per_sec": 0 00:14:05.930 }, 00:14:05.930 "claimed": true, 00:14:05.930 "claim_type": "exclusive_write", 00:14:05.930 "zoned": false, 00:14:05.930 "supported_io_types": { 00:14:05.930 "read": true, 00:14:05.930 "write": true, 00:14:05.930 "unmap": true, 00:14:05.930 "flush": true, 00:14:05.930 "reset": true, 00:14:05.930 "nvme_admin": false, 00:14:05.930 "nvme_io": false, 00:14:05.930 "nvme_io_md": false, 00:14:05.930 "write_zeroes": true, 00:14:05.930 "zcopy": true, 00:14:05.930 "get_zone_info": false, 00:14:05.930 "zone_management": false, 00:14:05.930 "zone_append": false, 00:14:05.930 "compare": false, 00:14:05.930 "compare_and_write": false, 00:14:05.930 "abort": true, 00:14:05.930 "seek_hole": false, 00:14:05.930 "seek_data": false, 00:14:05.930 "copy": true, 00:14:05.930 "nvme_iov_md": false 00:14:05.930 }, 00:14:05.930 "memory_domains": [ 00:14:05.930 { 00:14:05.930 "dma_device_id": "system", 00:14:05.930 "dma_device_type": 1 00:14:05.930 }, 00:14:05.930 { 00:14:05.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.930 "dma_device_type": 2 00:14:05.930 } 00:14:05.930 ], 00:14:05.930 "driver_specific": {} 00:14:05.930 } 00:14:05.930 ] 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.930 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.930 "name": "Existed_Raid", 00:14:05.931 "uuid": "c397c301-2304-4491-9b39-3f5c18255f6c", 00:14:05.931 "strip_size_kb": 0, 00:14:05.931 "state": "online", 00:14:05.931 "raid_level": "raid1", 00:14:05.931 "superblock": true, 00:14:05.931 "num_base_bdevs": 4, 
00:14:05.931 "num_base_bdevs_discovered": 4, 00:14:05.931 "num_base_bdevs_operational": 4, 00:14:05.931 "base_bdevs_list": [ 00:14:05.931 { 00:14:05.931 "name": "BaseBdev1", 00:14:05.931 "uuid": "b696a74d-5a7a-409c-987f-ce2ea1492b8d", 00:14:05.931 "is_configured": true, 00:14:05.931 "data_offset": 2048, 00:14:05.931 "data_size": 63488 00:14:05.931 }, 00:14:05.931 { 00:14:05.931 "name": "BaseBdev2", 00:14:05.931 "uuid": "642beb4a-acba-4baa-8525-dab63bb211a0", 00:14:05.931 "is_configured": true, 00:14:05.931 "data_offset": 2048, 00:14:05.931 "data_size": 63488 00:14:05.931 }, 00:14:05.931 { 00:14:05.931 "name": "BaseBdev3", 00:14:05.931 "uuid": "64e33c80-6468-478a-b7a2-f591d27a7dc2", 00:14:05.931 "is_configured": true, 00:14:05.931 "data_offset": 2048, 00:14:05.931 "data_size": 63488 00:14:05.931 }, 00:14:05.931 { 00:14:05.931 "name": "BaseBdev4", 00:14:05.931 "uuid": "12c98ad6-405b-4ada-bd7d-7215809d6a28", 00:14:05.931 "is_configured": true, 00:14:05.931 "data_offset": 2048, 00:14:05.931 "data_size": 63488 00:14:05.931 } 00:14:05.931 ] 00:14:05.931 }' 00:14:05.931 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.931 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.498 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:06.498 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:06.498 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:06.498 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:06.498 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:06.498 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:06.498 
10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:06.498 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:06.498 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.499 [2024-11-15 10:42:36.837692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:06.499 "name": "Existed_Raid", 00:14:06.499 "aliases": [ 00:14:06.499 "c397c301-2304-4491-9b39-3f5c18255f6c" 00:14:06.499 ], 00:14:06.499 "product_name": "Raid Volume", 00:14:06.499 "block_size": 512, 00:14:06.499 "num_blocks": 63488, 00:14:06.499 "uuid": "c397c301-2304-4491-9b39-3f5c18255f6c", 00:14:06.499 "assigned_rate_limits": { 00:14:06.499 "rw_ios_per_sec": 0, 00:14:06.499 "rw_mbytes_per_sec": 0, 00:14:06.499 "r_mbytes_per_sec": 0, 00:14:06.499 "w_mbytes_per_sec": 0 00:14:06.499 }, 00:14:06.499 "claimed": false, 00:14:06.499 "zoned": false, 00:14:06.499 "supported_io_types": { 00:14:06.499 "read": true, 00:14:06.499 "write": true, 00:14:06.499 "unmap": false, 00:14:06.499 "flush": false, 00:14:06.499 "reset": true, 00:14:06.499 "nvme_admin": false, 00:14:06.499 "nvme_io": false, 00:14:06.499 "nvme_io_md": false, 00:14:06.499 "write_zeroes": true, 00:14:06.499 "zcopy": false, 00:14:06.499 "get_zone_info": false, 00:14:06.499 "zone_management": false, 00:14:06.499 "zone_append": false, 00:14:06.499 "compare": false, 00:14:06.499 "compare_and_write": false, 00:14:06.499 "abort": false, 00:14:06.499 "seek_hole": false, 00:14:06.499 "seek_data": false, 00:14:06.499 "copy": false, 00:14:06.499 
"nvme_iov_md": false 00:14:06.499 }, 00:14:06.499 "memory_domains": [ 00:14:06.499 { 00:14:06.499 "dma_device_id": "system", 00:14:06.499 "dma_device_type": 1 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.499 "dma_device_type": 2 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "dma_device_id": "system", 00:14:06.499 "dma_device_type": 1 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.499 "dma_device_type": 2 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "dma_device_id": "system", 00:14:06.499 "dma_device_type": 1 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.499 "dma_device_type": 2 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "dma_device_id": "system", 00:14:06.499 "dma_device_type": 1 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.499 "dma_device_type": 2 00:14:06.499 } 00:14:06.499 ], 00:14:06.499 "driver_specific": { 00:14:06.499 "raid": { 00:14:06.499 "uuid": "c397c301-2304-4491-9b39-3f5c18255f6c", 00:14:06.499 "strip_size_kb": 0, 00:14:06.499 "state": "online", 00:14:06.499 "raid_level": "raid1", 00:14:06.499 "superblock": true, 00:14:06.499 "num_base_bdevs": 4, 00:14:06.499 "num_base_bdevs_discovered": 4, 00:14:06.499 "num_base_bdevs_operational": 4, 00:14:06.499 "base_bdevs_list": [ 00:14:06.499 { 00:14:06.499 "name": "BaseBdev1", 00:14:06.499 "uuid": "b696a74d-5a7a-409c-987f-ce2ea1492b8d", 00:14:06.499 "is_configured": true, 00:14:06.499 "data_offset": 2048, 00:14:06.499 "data_size": 63488 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "name": "BaseBdev2", 00:14:06.499 "uuid": "642beb4a-acba-4baa-8525-dab63bb211a0", 00:14:06.499 "is_configured": true, 00:14:06.499 "data_offset": 2048, 00:14:06.499 "data_size": 63488 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "name": "BaseBdev3", 00:14:06.499 "uuid": "64e33c80-6468-478a-b7a2-f591d27a7dc2", 00:14:06.499 "is_configured": true, 
00:14:06.499 "data_offset": 2048, 00:14:06.499 "data_size": 63488 00:14:06.499 }, 00:14:06.499 { 00:14:06.499 "name": "BaseBdev4", 00:14:06.499 "uuid": "12c98ad6-405b-4ada-bd7d-7215809d6a28", 00:14:06.499 "is_configured": true, 00:14:06.499 "data_offset": 2048, 00:14:06.499 "data_size": 63488 00:14:06.499 } 00:14:06.499 ] 00:14:06.499 } 00:14:06.499 } 00:14:06.499 }' 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:06.499 BaseBdev2 00:14:06.499 BaseBdev3 00:14:06.499 BaseBdev4' 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.499 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.499 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.758 10:42:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:06.758 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.759 [2024-11-15 10:42:37.225433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:06.759 10:42:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.759 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.018 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.018 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.018 "name": "Existed_Raid", 00:14:07.018 "uuid": "c397c301-2304-4491-9b39-3f5c18255f6c", 00:14:07.018 "strip_size_kb": 0, 00:14:07.018 
"state": "online", 00:14:07.018 "raid_level": "raid1", 00:14:07.018 "superblock": true, 00:14:07.018 "num_base_bdevs": 4, 00:14:07.018 "num_base_bdevs_discovered": 3, 00:14:07.018 "num_base_bdevs_operational": 3, 00:14:07.018 "base_bdevs_list": [ 00:14:07.018 { 00:14:07.018 "name": null, 00:14:07.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.018 "is_configured": false, 00:14:07.018 "data_offset": 0, 00:14:07.018 "data_size": 63488 00:14:07.018 }, 00:14:07.018 { 00:14:07.018 "name": "BaseBdev2", 00:14:07.018 "uuid": "642beb4a-acba-4baa-8525-dab63bb211a0", 00:14:07.018 "is_configured": true, 00:14:07.018 "data_offset": 2048, 00:14:07.018 "data_size": 63488 00:14:07.018 }, 00:14:07.018 { 00:14:07.018 "name": "BaseBdev3", 00:14:07.018 "uuid": "64e33c80-6468-478a-b7a2-f591d27a7dc2", 00:14:07.018 "is_configured": true, 00:14:07.018 "data_offset": 2048, 00:14:07.018 "data_size": 63488 00:14:07.018 }, 00:14:07.018 { 00:14:07.018 "name": "BaseBdev4", 00:14:07.018 "uuid": "12c98ad6-405b-4ada-bd7d-7215809d6a28", 00:14:07.018 "is_configured": true, 00:14:07.018 "data_offset": 2048, 00:14:07.018 "data_size": 63488 00:14:07.018 } 00:14:07.018 ] 00:14:07.018 }' 00:14:07.018 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.018 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.277 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:07.277 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.277 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:07.277 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.277 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.277 10:42:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.536 [2024-11-15 10:42:37.893314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:07.536 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.536 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:07.536 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:07.536 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:07.536 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.536 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.536 [2024-11-15 10:42:38.034603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.855 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.856 [2024-11-15 10:42:38.169463] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:07.856 [2024-11-15 10:42:38.169595] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.856 [2024-11-15 10:42:38.251686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.856 [2024-11-15 10:42:38.251754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.856 [2024-11-15 10:42:38.251773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.856 BaseBdev2 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:07.856 [ 00:14:07.856 { 00:14:07.856 "name": "BaseBdev2", 00:14:07.856 "aliases": [ 00:14:07.856 "bc00b058-1d6c-4175-87f3-52bf311683b1" 00:14:07.856 ], 00:14:07.856 "product_name": "Malloc disk", 00:14:07.856 "block_size": 512, 00:14:07.856 "num_blocks": 65536, 00:14:07.856 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 00:14:07.856 "assigned_rate_limits": { 00:14:07.856 "rw_ios_per_sec": 0, 00:14:07.856 "rw_mbytes_per_sec": 0, 00:14:07.856 "r_mbytes_per_sec": 0, 00:14:07.856 "w_mbytes_per_sec": 0 00:14:07.856 }, 00:14:07.856 "claimed": false, 00:14:07.856 "zoned": false, 00:14:07.856 "supported_io_types": { 00:14:07.856 "read": true, 00:14:07.856 "write": true, 00:14:07.856 "unmap": true, 00:14:07.856 "flush": true, 00:14:07.856 "reset": true, 00:14:07.856 "nvme_admin": false, 00:14:07.856 "nvme_io": false, 00:14:07.856 "nvme_io_md": false, 00:14:07.856 "write_zeroes": true, 00:14:07.856 "zcopy": true, 00:14:07.856 "get_zone_info": false, 00:14:07.856 "zone_management": false, 00:14:07.856 "zone_append": false, 00:14:07.856 "compare": false, 00:14:07.856 "compare_and_write": false, 00:14:07.856 "abort": true, 00:14:07.856 "seek_hole": false, 00:14:07.856 "seek_data": false, 00:14:07.856 "copy": true, 00:14:07.856 "nvme_iov_md": false 00:14:07.856 }, 00:14:07.856 "memory_domains": [ 00:14:07.856 { 00:14:07.856 "dma_device_id": "system", 00:14:07.856 "dma_device_type": 1 00:14:07.856 }, 00:14:07.856 { 00:14:07.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.856 "dma_device_type": 2 00:14:07.856 } 00:14:07.856 ], 00:14:07.856 "driver_specific": {} 00:14:07.856 } 00:14:07.856 ] 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.856 10:42:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.856 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.130 BaseBdev3 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.130 10:42:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.130 [ 00:14:08.130 { 00:14:08.130 "name": "BaseBdev3", 00:14:08.130 "aliases": [ 00:14:08.130 "934994b6-e09c-469e-b1cd-28490fd710bc" 00:14:08.130 ], 00:14:08.130 "product_name": "Malloc disk", 00:14:08.130 "block_size": 512, 00:14:08.130 "num_blocks": 65536, 00:14:08.130 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:08.130 "assigned_rate_limits": { 00:14:08.130 "rw_ios_per_sec": 0, 00:14:08.130 "rw_mbytes_per_sec": 0, 00:14:08.130 "r_mbytes_per_sec": 0, 00:14:08.130 "w_mbytes_per_sec": 0 00:14:08.130 }, 00:14:08.130 "claimed": false, 00:14:08.130 "zoned": false, 00:14:08.130 "supported_io_types": { 00:14:08.130 "read": true, 00:14:08.130 "write": true, 00:14:08.130 "unmap": true, 00:14:08.130 "flush": true, 00:14:08.130 "reset": true, 00:14:08.130 "nvme_admin": false, 00:14:08.130 "nvme_io": false, 00:14:08.130 "nvme_io_md": false, 00:14:08.130 "write_zeroes": true, 00:14:08.130 "zcopy": true, 00:14:08.130 "get_zone_info": false, 00:14:08.130 "zone_management": false, 00:14:08.130 "zone_append": false, 00:14:08.130 "compare": false, 00:14:08.130 "compare_and_write": false, 00:14:08.130 "abort": true, 00:14:08.130 "seek_hole": false, 00:14:08.130 "seek_data": false, 00:14:08.130 "copy": true, 00:14:08.130 "nvme_iov_md": false 00:14:08.130 }, 00:14:08.130 "memory_domains": [ 00:14:08.130 { 00:14:08.130 "dma_device_id": "system", 00:14:08.130 "dma_device_type": 1 00:14:08.130 }, 00:14:08.130 { 00:14:08.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.130 "dma_device_type": 2 00:14:08.130 } 00:14:08.130 ], 00:14:08.130 "driver_specific": {} 00:14:08.130 } 00:14:08.130 ] 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.130 BaseBdev4 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.130 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.130 [ 00:14:08.130 { 00:14:08.130 "name": "BaseBdev4", 00:14:08.130 "aliases": [ 00:14:08.130 "9cff183d-2436-4b1b-9d3d-08fb04232b2f" 00:14:08.130 ], 00:14:08.130 "product_name": "Malloc disk", 00:14:08.130 "block_size": 512, 00:14:08.130 "num_blocks": 65536, 00:14:08.131 "uuid": "9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:08.131 "assigned_rate_limits": { 00:14:08.131 "rw_ios_per_sec": 0, 00:14:08.131 "rw_mbytes_per_sec": 0, 00:14:08.131 "r_mbytes_per_sec": 0, 00:14:08.131 "w_mbytes_per_sec": 0 00:14:08.131 }, 00:14:08.131 "claimed": false, 00:14:08.131 "zoned": false, 00:14:08.131 "supported_io_types": { 00:14:08.131 "read": true, 00:14:08.131 "write": true, 00:14:08.131 "unmap": true, 00:14:08.131 "flush": true, 00:14:08.131 "reset": true, 00:14:08.131 "nvme_admin": false, 00:14:08.131 "nvme_io": false, 00:14:08.131 "nvme_io_md": false, 00:14:08.131 "write_zeroes": true, 00:14:08.131 "zcopy": true, 00:14:08.131 "get_zone_info": false, 00:14:08.131 "zone_management": false, 00:14:08.131 "zone_append": false, 00:14:08.131 "compare": false, 00:14:08.131 "compare_and_write": false, 00:14:08.131 "abort": true, 00:14:08.131 "seek_hole": false, 00:14:08.131 "seek_data": false, 00:14:08.131 "copy": true, 00:14:08.131 "nvme_iov_md": false 00:14:08.131 }, 00:14:08.131 "memory_domains": [ 00:14:08.131 { 00:14:08.131 "dma_device_id": "system", 00:14:08.131 "dma_device_type": 1 00:14:08.131 }, 00:14:08.131 { 00:14:08.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.131 "dma_device_type": 2 00:14:08.131 } 00:14:08.131 ], 00:14:08.131 "driver_specific": {} 00:14:08.131 } 00:14:08.131 ] 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
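The `waitforbdev` helper traced above (`local bdev_name`, `bdev_timeout=2000`, `rpc_cmd bdev_get_bdevs -b <name> -t 2000`) polls until the named bdev shows up before the test proceeds. A minimal Python sketch of that polling pattern; the `get_bdev` callable is a stand-in for the real `rpc_cmd` transport, not SPDK's API:

```python
import time

def wait_for_bdev(get_bdev, name, timeout_s=2.0, poll_interval_s=0.05):
    """Poll get_bdev(name) until it returns a descriptor or the timeout expires.

    get_bdev models an RPC call such as `bdev_get_bdevs -b <name>`: it should
    return the bdev's dict when the bdev exists, or None otherwise.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        bdev = get_bdev(name)
        if bdev is not None:
            return bdev
        if time.monotonic() >= deadline:
            raise TimeoutError(f"bdev {name!r} did not appear within {timeout_s}s")
        time.sleep(poll_interval_s)
```

In the log this wait is what separates `bdev_malloc_create ... -b BaseBdev2` from the subsequent `bdev_get_bdevs -b BaseBdev2 -t 2000` dump of the full descriptor.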
00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.131 [2024-11-15 10:42:38.527694] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.131 [2024-11-15 10:42:38.527890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.131 [2024-11-15 10:42:38.528020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.131 [2024-11-15 10:42:38.530398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.131 [2024-11-15 10:42:38.530633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.131 "name": "Existed_Raid", 00:14:08.131 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:08.131 "strip_size_kb": 0, 00:14:08.131 "state": "configuring", 00:14:08.131 "raid_level": "raid1", 00:14:08.131 "superblock": true, 00:14:08.131 "num_base_bdevs": 4, 00:14:08.131 "num_base_bdevs_discovered": 3, 00:14:08.131 "num_base_bdevs_operational": 4, 00:14:08.131 "base_bdevs_list": [ 00:14:08.131 { 00:14:08.131 "name": "BaseBdev1", 00:14:08.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.131 "is_configured": false, 00:14:08.131 "data_offset": 0, 00:14:08.131 "data_size": 0 00:14:08.131 }, 00:14:08.131 { 00:14:08.131 "name": "BaseBdev2", 00:14:08.131 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 
00:14:08.131 "is_configured": true, 00:14:08.131 "data_offset": 2048, 00:14:08.131 "data_size": 63488 00:14:08.131 }, 00:14:08.131 { 00:14:08.131 "name": "BaseBdev3", 00:14:08.131 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:08.131 "is_configured": true, 00:14:08.131 "data_offset": 2048, 00:14:08.131 "data_size": 63488 00:14:08.131 }, 00:14:08.131 { 00:14:08.131 "name": "BaseBdev4", 00:14:08.131 "uuid": "9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:08.131 "is_configured": true, 00:14:08.131 "data_offset": 2048, 00:14:08.131 "data_size": 63488 00:14:08.131 } 00:14:08.131 ] 00:14:08.131 }' 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.131 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.698 [2024-11-15 10:42:39.023863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.698 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.698 "name": "Existed_Raid", 00:14:08.699 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:08.699 "strip_size_kb": 0, 00:14:08.699 "state": "configuring", 00:14:08.699 "raid_level": "raid1", 00:14:08.699 "superblock": true, 00:14:08.699 "num_base_bdevs": 4, 00:14:08.699 "num_base_bdevs_discovered": 2, 00:14:08.699 "num_base_bdevs_operational": 4, 00:14:08.699 "base_bdevs_list": [ 00:14:08.699 { 00:14:08.699 "name": "BaseBdev1", 00:14:08.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.699 "is_configured": false, 00:14:08.699 "data_offset": 0, 00:14:08.699 "data_size": 0 00:14:08.699 }, 00:14:08.699 { 00:14:08.699 "name": null, 00:14:08.699 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 00:14:08.699 
"is_configured": false, 00:14:08.699 "data_offset": 0, 00:14:08.699 "data_size": 63488 00:14:08.699 }, 00:14:08.699 { 00:14:08.699 "name": "BaseBdev3", 00:14:08.699 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:08.699 "is_configured": true, 00:14:08.699 "data_offset": 2048, 00:14:08.699 "data_size": 63488 00:14:08.699 }, 00:14:08.699 { 00:14:08.699 "name": "BaseBdev4", 00:14:08.699 "uuid": "9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:08.699 "is_configured": true, 00:14:08.699 "data_offset": 2048, 00:14:08.699 "data_size": 63488 00:14:08.699 } 00:14:08.699 ] 00:14:08.699 }' 00:14:08.699 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.699 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.266 [2024-11-15 10:42:39.669237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.266 BaseBdev1 
00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.266 [ 00:14:09.266 { 00:14:09.266 "name": "BaseBdev1", 00:14:09.266 "aliases": [ 00:14:09.266 "3be5208f-ab30-43fc-9c21-a88c5d37adaa" 00:14:09.266 ], 00:14:09.266 "product_name": "Malloc disk", 00:14:09.266 "block_size": 512, 00:14:09.266 "num_blocks": 65536, 00:14:09.266 "uuid": "3be5208f-ab30-43fc-9c21-a88c5d37adaa", 00:14:09.266 "assigned_rate_limits": { 00:14:09.266 
"rw_ios_per_sec": 0, 00:14:09.266 "rw_mbytes_per_sec": 0, 00:14:09.266 "r_mbytes_per_sec": 0, 00:14:09.266 "w_mbytes_per_sec": 0 00:14:09.266 }, 00:14:09.266 "claimed": true, 00:14:09.266 "claim_type": "exclusive_write", 00:14:09.266 "zoned": false, 00:14:09.266 "supported_io_types": { 00:14:09.266 "read": true, 00:14:09.266 "write": true, 00:14:09.266 "unmap": true, 00:14:09.266 "flush": true, 00:14:09.266 "reset": true, 00:14:09.266 "nvme_admin": false, 00:14:09.266 "nvme_io": false, 00:14:09.266 "nvme_io_md": false, 00:14:09.266 "write_zeroes": true, 00:14:09.266 "zcopy": true, 00:14:09.266 "get_zone_info": false, 00:14:09.266 "zone_management": false, 00:14:09.266 "zone_append": false, 00:14:09.266 "compare": false, 00:14:09.266 "compare_and_write": false, 00:14:09.266 "abort": true, 00:14:09.266 "seek_hole": false, 00:14:09.266 "seek_data": false, 00:14:09.266 "copy": true, 00:14:09.266 "nvme_iov_md": false 00:14:09.266 }, 00:14:09.266 "memory_domains": [ 00:14:09.266 { 00:14:09.266 "dma_device_id": "system", 00:14:09.266 "dma_device_type": 1 00:14:09.266 }, 00:14:09.266 { 00:14:09.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.266 "dma_device_type": 2 00:14:09.266 } 00:14:09.266 ], 00:14:09.266 "driver_specific": {} 00:14:09.266 } 00:14:09.266 ] 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.266 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.267 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.267 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.267 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.267 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.267 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.267 "name": "Existed_Raid", 00:14:09.267 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:09.267 "strip_size_kb": 0, 00:14:09.267 "state": "configuring", 00:14:09.267 "raid_level": "raid1", 00:14:09.267 "superblock": true, 00:14:09.267 "num_base_bdevs": 4, 00:14:09.267 "num_base_bdevs_discovered": 3, 00:14:09.267 "num_base_bdevs_operational": 4, 00:14:09.267 "base_bdevs_list": [ 00:14:09.267 { 00:14:09.267 "name": "BaseBdev1", 00:14:09.267 "uuid": "3be5208f-ab30-43fc-9c21-a88c5d37adaa", 00:14:09.267 "is_configured": true, 00:14:09.267 "data_offset": 2048, 00:14:09.267 "data_size": 63488 
00:14:09.267 }, 00:14:09.267 { 00:14:09.267 "name": null, 00:14:09.267 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 00:14:09.267 "is_configured": false, 00:14:09.267 "data_offset": 0, 00:14:09.267 "data_size": 63488 00:14:09.267 }, 00:14:09.267 { 00:14:09.267 "name": "BaseBdev3", 00:14:09.267 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:09.267 "is_configured": true, 00:14:09.267 "data_offset": 2048, 00:14:09.267 "data_size": 63488 00:14:09.267 }, 00:14:09.267 { 00:14:09.267 "name": "BaseBdev4", 00:14:09.267 "uuid": "9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:09.267 "is_configured": true, 00:14:09.267 "data_offset": 2048, 00:14:09.267 "data_size": 63488 00:14:09.267 } 00:14:09.267 ] 00:14:09.267 }' 00:14:09.267 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.267 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.836 
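The `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` calls above fetch the array with `bdev_raid_get_bdevs all`, select it with the jq filter `.[] | select(.name == "Existed_Raid")`, and compare fields against the expected state. The same checks as a small Python sketch over the JSON shape shown in the log; the helper itself is illustrative, not part of the SPDK test scripts:

```python
import json

def verify_raid_state(raid_bdevs_json, name, expected_state,
                      expected_level, expected_operational):
    """Mirror of jq's '.[] | select(.name == NAME)' plus the field comparisons."""
    bdevs = json.loads(raid_bdevs_json)
    info = next((b for b in bdevs if b["name"] == name), None)
    assert info is not None, f"raid bdev {name!r} not found"
    assert info["state"] == expected_state
    assert info["raid_level"] == expected_level
    assert info["num_base_bdevs_operational"] == expected_operational
    # discovered count must match the configured entries in base_bdevs_list
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return info
```

Fed the "configuring" descriptor dumped above (three of four base bdevs configured, `BaseBdev1` still the all-zero placeholder), these assertions pass exactly as the shell test's `[ ... '!=' ... ]` comparisons do.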
[2024-11-15 10:42:40.293522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.836 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.836 10:42:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.836 "name": "Existed_Raid", 00:14:09.836 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:09.836 "strip_size_kb": 0, 00:14:09.836 "state": "configuring", 00:14:09.836 "raid_level": "raid1", 00:14:09.836 "superblock": true, 00:14:09.836 "num_base_bdevs": 4, 00:14:09.836 "num_base_bdevs_discovered": 2, 00:14:09.836 "num_base_bdevs_operational": 4, 00:14:09.836 "base_bdevs_list": [ 00:14:09.836 { 00:14:09.836 "name": "BaseBdev1", 00:14:09.836 "uuid": "3be5208f-ab30-43fc-9c21-a88c5d37adaa", 00:14:09.836 "is_configured": true, 00:14:09.836 "data_offset": 2048, 00:14:09.836 "data_size": 63488 00:14:09.836 }, 00:14:09.836 { 00:14:09.836 "name": null, 00:14:09.836 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 00:14:09.837 "is_configured": false, 00:14:09.837 "data_offset": 0, 00:14:09.837 "data_size": 63488 00:14:09.837 }, 00:14:09.837 { 00:14:09.837 "name": null, 00:14:09.837 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:09.837 "is_configured": false, 00:14:09.837 "data_offset": 0, 00:14:09.837 "data_size": 63488 00:14:09.837 }, 00:14:09.837 { 00:14:09.837 "name": "BaseBdev4", 00:14:09.837 "uuid": "9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:09.837 "is_configured": true, 00:14:09.837 "data_offset": 2048, 00:14:09.837 "data_size": 63488 00:14:09.837 } 00:14:09.837 ] 00:14:09.837 }' 00:14:09.837 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.837 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.404 10:42:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.404 [2024-11-15 10:42:40.837600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.404 "name": "Existed_Raid", 00:14:10.404 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:10.404 "strip_size_kb": 0, 00:14:10.404 "state": "configuring", 00:14:10.404 "raid_level": "raid1", 00:14:10.404 "superblock": true, 00:14:10.404 "num_base_bdevs": 4, 00:14:10.404 "num_base_bdevs_discovered": 3, 00:14:10.404 "num_base_bdevs_operational": 4, 00:14:10.404 "base_bdevs_list": [ 00:14:10.404 { 00:14:10.404 "name": "BaseBdev1", 00:14:10.404 "uuid": "3be5208f-ab30-43fc-9c21-a88c5d37adaa", 00:14:10.404 "is_configured": true, 00:14:10.404 "data_offset": 2048, 00:14:10.404 "data_size": 63488 00:14:10.404 }, 00:14:10.404 { 00:14:10.404 "name": null, 00:14:10.404 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 00:14:10.404 "is_configured": false, 00:14:10.404 "data_offset": 0, 00:14:10.404 "data_size": 63488 00:14:10.404 }, 00:14:10.404 { 00:14:10.404 "name": "BaseBdev3", 00:14:10.404 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:10.404 "is_configured": true, 00:14:10.404 "data_offset": 2048, 00:14:10.404 "data_size": 63488 00:14:10.404 }, 00:14:10.404 { 00:14:10.404 "name": "BaseBdev4", 00:14:10.404 "uuid": 
"9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:10.404 "is_configured": true, 00:14:10.404 "data_offset": 2048, 00:14:10.404 "data_size": 63488 00:14:10.404 } 00:14:10.404 ] 00:14:10.404 }' 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.404 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.985 [2024-11-15 10:42:41.373809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.985 "name": "Existed_Raid", 00:14:10.985 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:10.985 "strip_size_kb": 0, 00:14:10.985 "state": "configuring", 00:14:10.985 "raid_level": "raid1", 00:14:10.985 "superblock": true, 00:14:10.985 "num_base_bdevs": 4, 00:14:10.985 "num_base_bdevs_discovered": 2, 00:14:10.985 "num_base_bdevs_operational": 4, 00:14:10.985 "base_bdevs_list": [ 00:14:10.985 { 00:14:10.985 "name": null, 00:14:10.985 
"uuid": "3be5208f-ab30-43fc-9c21-a88c5d37adaa", 00:14:10.985 "is_configured": false, 00:14:10.985 "data_offset": 0, 00:14:10.985 "data_size": 63488 00:14:10.985 }, 00:14:10.985 { 00:14:10.985 "name": null, 00:14:10.985 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 00:14:10.985 "is_configured": false, 00:14:10.985 "data_offset": 0, 00:14:10.985 "data_size": 63488 00:14:10.985 }, 00:14:10.985 { 00:14:10.985 "name": "BaseBdev3", 00:14:10.985 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:10.985 "is_configured": true, 00:14:10.985 "data_offset": 2048, 00:14:10.985 "data_size": 63488 00:14:10.985 }, 00:14:10.985 { 00:14:10.985 "name": "BaseBdev4", 00:14:10.985 "uuid": "9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:10.985 "is_configured": true, 00:14:10.985 "data_offset": 2048, 00:14:10.985 "data_size": 63488 00:14:10.985 } 00:14:10.985 ] 00:14:10.985 }' 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.985 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.553 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:11.553 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.553 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.553 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.553 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.553 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:11.553 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:11.553 10:42:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.553 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.553 [2024-11-15 10:42:41.998938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.553 10:42:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.553 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.553 "name": "Existed_Raid", 00:14:11.553 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:11.553 "strip_size_kb": 0, 00:14:11.553 "state": "configuring", 00:14:11.553 "raid_level": "raid1", 00:14:11.553 "superblock": true, 00:14:11.553 "num_base_bdevs": 4, 00:14:11.553 "num_base_bdevs_discovered": 3, 00:14:11.553 "num_base_bdevs_operational": 4, 00:14:11.553 "base_bdevs_list": [ 00:14:11.553 { 00:14:11.553 "name": null, 00:14:11.554 "uuid": "3be5208f-ab30-43fc-9c21-a88c5d37adaa", 00:14:11.554 "is_configured": false, 00:14:11.554 "data_offset": 0, 00:14:11.554 "data_size": 63488 00:14:11.554 }, 00:14:11.554 { 00:14:11.554 "name": "BaseBdev2", 00:14:11.554 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 00:14:11.554 "is_configured": true, 00:14:11.554 "data_offset": 2048, 00:14:11.554 "data_size": 63488 00:14:11.554 }, 00:14:11.554 { 00:14:11.554 "name": "BaseBdev3", 00:14:11.554 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:11.554 "is_configured": true, 00:14:11.554 "data_offset": 2048, 00:14:11.554 "data_size": 63488 00:14:11.554 }, 00:14:11.554 { 00:14:11.554 "name": "BaseBdev4", 00:14:11.554 "uuid": "9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:11.554 "is_configured": true, 00:14:11.554 "data_offset": 2048, 00:14:11.554 "data_size": 63488 00:14:11.554 } 00:14:11.554 ] 00:14:11.554 }' 00:14:11.554 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.554 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.121 10:42:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3be5208f-ab30-43fc-9c21-a88c5d37adaa 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.121 [2024-11-15 10:42:42.580609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:12.121 [2024-11-15 10:42:42.581141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:12.121 [2024-11-15 10:42:42.581174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:12.121 NewBaseBdev 00:14:12.121 [2024-11-15 10:42:42.581517] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:12.121 [2024-11-15 10:42:42.581714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:12.121 [2024-11-15 10:42:42.581738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:12.121 [2024-11-15 10:42:42.581905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.121 
10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.121 [ 00:14:12.121 { 00:14:12.121 "name": "NewBaseBdev", 00:14:12.121 "aliases": [ 00:14:12.121 "3be5208f-ab30-43fc-9c21-a88c5d37adaa" 00:14:12.121 ], 00:14:12.121 "product_name": "Malloc disk", 00:14:12.121 "block_size": 512, 00:14:12.121 "num_blocks": 65536, 00:14:12.121 "uuid": "3be5208f-ab30-43fc-9c21-a88c5d37adaa", 00:14:12.121 "assigned_rate_limits": { 00:14:12.121 "rw_ios_per_sec": 0, 00:14:12.121 "rw_mbytes_per_sec": 0, 00:14:12.121 "r_mbytes_per_sec": 0, 00:14:12.121 "w_mbytes_per_sec": 0 00:14:12.121 }, 00:14:12.121 "claimed": true, 00:14:12.121 "claim_type": "exclusive_write", 00:14:12.121 "zoned": false, 00:14:12.121 "supported_io_types": { 00:14:12.121 "read": true, 00:14:12.121 "write": true, 00:14:12.121 "unmap": true, 00:14:12.121 "flush": true, 00:14:12.121 "reset": true, 00:14:12.121 "nvme_admin": false, 00:14:12.121 "nvme_io": false, 00:14:12.121 "nvme_io_md": false, 00:14:12.121 "write_zeroes": true, 00:14:12.121 "zcopy": true, 00:14:12.121 "get_zone_info": false, 00:14:12.121 "zone_management": false, 00:14:12.121 "zone_append": false, 00:14:12.121 "compare": false, 00:14:12.121 "compare_and_write": false, 00:14:12.121 "abort": true, 00:14:12.121 "seek_hole": false, 00:14:12.121 "seek_data": false, 00:14:12.121 "copy": true, 00:14:12.121 "nvme_iov_md": false 00:14:12.121 }, 00:14:12.121 "memory_domains": [ 00:14:12.121 { 00:14:12.121 "dma_device_id": "system", 00:14:12.121 "dma_device_type": 1 00:14:12.121 }, 00:14:12.121 { 00:14:12.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.121 "dma_device_type": 2 00:14:12.121 } 00:14:12.121 ], 00:14:12.121 "driver_specific": {} 00:14:12.121 } 00:14:12.121 ] 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:12.121 10:42:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.121 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.122 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.122 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.122 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.122 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.122 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.122 "name": "Existed_Raid", 00:14:12.122 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:12.122 "strip_size_kb": 0, 00:14:12.122 
"state": "online", 00:14:12.122 "raid_level": "raid1", 00:14:12.122 "superblock": true, 00:14:12.122 "num_base_bdevs": 4, 00:14:12.122 "num_base_bdevs_discovered": 4, 00:14:12.122 "num_base_bdevs_operational": 4, 00:14:12.122 "base_bdevs_list": [ 00:14:12.122 { 00:14:12.122 "name": "NewBaseBdev", 00:14:12.122 "uuid": "3be5208f-ab30-43fc-9c21-a88c5d37adaa", 00:14:12.122 "is_configured": true, 00:14:12.122 "data_offset": 2048, 00:14:12.122 "data_size": 63488 00:14:12.122 }, 00:14:12.122 { 00:14:12.122 "name": "BaseBdev2", 00:14:12.122 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 00:14:12.122 "is_configured": true, 00:14:12.122 "data_offset": 2048, 00:14:12.122 "data_size": 63488 00:14:12.122 }, 00:14:12.122 { 00:14:12.122 "name": "BaseBdev3", 00:14:12.122 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:12.122 "is_configured": true, 00:14:12.122 "data_offset": 2048, 00:14:12.122 "data_size": 63488 00:14:12.122 }, 00:14:12.122 { 00:14:12.122 "name": "BaseBdev4", 00:14:12.122 "uuid": "9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:12.122 "is_configured": true, 00:14:12.122 "data_offset": 2048, 00:14:12.122 "data_size": 63488 00:14:12.122 } 00:14:12.122 ] 00:14:12.122 }' 00:14:12.122 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.122 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:12.689 
10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:12.689 [2024-11-15 10:42:43.105217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.689 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:12.689 "name": "Existed_Raid", 00:14:12.689 "aliases": [ 00:14:12.689 "a25df3db-9aa3-41e2-924a-5aba8e288596" 00:14:12.689 ], 00:14:12.690 "product_name": "Raid Volume", 00:14:12.690 "block_size": 512, 00:14:12.690 "num_blocks": 63488, 00:14:12.690 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:12.690 "assigned_rate_limits": { 00:14:12.690 "rw_ios_per_sec": 0, 00:14:12.690 "rw_mbytes_per_sec": 0, 00:14:12.690 "r_mbytes_per_sec": 0, 00:14:12.690 "w_mbytes_per_sec": 0 00:14:12.690 }, 00:14:12.690 "claimed": false, 00:14:12.690 "zoned": false, 00:14:12.690 "supported_io_types": { 00:14:12.690 "read": true, 00:14:12.690 "write": true, 00:14:12.690 "unmap": false, 00:14:12.690 "flush": false, 00:14:12.690 "reset": true, 00:14:12.690 "nvme_admin": false, 00:14:12.690 "nvme_io": false, 00:14:12.690 "nvme_io_md": false, 00:14:12.690 "write_zeroes": true, 00:14:12.690 "zcopy": false, 00:14:12.690 "get_zone_info": false, 00:14:12.690 "zone_management": false, 00:14:12.690 "zone_append": false, 00:14:12.690 "compare": false, 00:14:12.690 "compare_and_write": false, 00:14:12.690 
"abort": false, 00:14:12.690 "seek_hole": false, 00:14:12.690 "seek_data": false, 00:14:12.690 "copy": false, 00:14:12.690 "nvme_iov_md": false 00:14:12.690 }, 00:14:12.690 "memory_domains": [ 00:14:12.690 { 00:14:12.690 "dma_device_id": "system", 00:14:12.690 "dma_device_type": 1 00:14:12.690 }, 00:14:12.690 { 00:14:12.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.690 "dma_device_type": 2 00:14:12.690 }, 00:14:12.690 { 00:14:12.690 "dma_device_id": "system", 00:14:12.690 "dma_device_type": 1 00:14:12.690 }, 00:14:12.690 { 00:14:12.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.690 "dma_device_type": 2 00:14:12.690 }, 00:14:12.690 { 00:14:12.690 "dma_device_id": "system", 00:14:12.690 "dma_device_type": 1 00:14:12.690 }, 00:14:12.690 { 00:14:12.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.690 "dma_device_type": 2 00:14:12.690 }, 00:14:12.690 { 00:14:12.690 "dma_device_id": "system", 00:14:12.690 "dma_device_type": 1 00:14:12.690 }, 00:14:12.690 { 00:14:12.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.690 "dma_device_type": 2 00:14:12.690 } 00:14:12.690 ], 00:14:12.690 "driver_specific": { 00:14:12.690 "raid": { 00:14:12.690 "uuid": "a25df3db-9aa3-41e2-924a-5aba8e288596", 00:14:12.690 "strip_size_kb": 0, 00:14:12.690 "state": "online", 00:14:12.690 "raid_level": "raid1", 00:14:12.690 "superblock": true, 00:14:12.690 "num_base_bdevs": 4, 00:14:12.690 "num_base_bdevs_discovered": 4, 00:14:12.690 "num_base_bdevs_operational": 4, 00:14:12.690 "base_bdevs_list": [ 00:14:12.690 { 00:14:12.690 "name": "NewBaseBdev", 00:14:12.690 "uuid": "3be5208f-ab30-43fc-9c21-a88c5d37adaa", 00:14:12.690 "is_configured": true, 00:14:12.690 "data_offset": 2048, 00:14:12.690 "data_size": 63488 00:14:12.690 }, 00:14:12.690 { 00:14:12.690 "name": "BaseBdev2", 00:14:12.690 "uuid": "bc00b058-1d6c-4175-87f3-52bf311683b1", 00:14:12.690 "is_configured": true, 00:14:12.690 "data_offset": 2048, 00:14:12.690 "data_size": 63488 00:14:12.690 }, 00:14:12.690 { 
00:14:12.690 "name": "BaseBdev3", 00:14:12.690 "uuid": "934994b6-e09c-469e-b1cd-28490fd710bc", 00:14:12.690 "is_configured": true, 00:14:12.690 "data_offset": 2048, 00:14:12.690 "data_size": 63488 00:14:12.690 }, 00:14:12.690 { 00:14:12.690 "name": "BaseBdev4", 00:14:12.690 "uuid": "9cff183d-2436-4b1b-9d3d-08fb04232b2f", 00:14:12.690 "is_configured": true, 00:14:12.690 "data_offset": 2048, 00:14:12.690 "data_size": 63488 00:14:12.690 } 00:14:12.690 ] 00:14:12.690 } 00:14:12.690 } 00:14:12.690 }' 00:14:12.690 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.690 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:12.690 BaseBdev2 00:14:12.690 BaseBdev3 00:14:12.690 BaseBdev4' 00:14:12.690 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.949 [2024-11-15 10:42:43.472867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.949 [2024-11-15 10:42:43.472900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.949 [2024-11-15 10:42:43.473002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.949 [2024-11-15 10:42:43.473385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.949 [2024-11-15 10:42:43.473409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74167 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74167 ']' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74167 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:12.949 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74167 00:14:13.208 killing process with pid 74167 00:14:13.208 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:13.208 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:13.208 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74167' 00:14:13.208 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74167 00:14:13.208 [2024-11-15 10:42:43.509762] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.208 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74167 00:14:13.466 [2024-11-15 10:42:43.840024] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.400 ************************************ 00:14:14.400 END TEST raid_state_function_test_sb 00:14:14.400 ************************************ 00:14:14.400 10:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:14.400 00:14:14.400 real 0m12.525s 
00:14:14.400 user 0m21.013s 00:14:14.400 sys 0m1.592s 00:14:14.400 10:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:14.400 10:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.400 10:42:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:14:14.400 10:42:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:14.400 10:42:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:14.400 10:42:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.400 ************************************ 00:14:14.400 START TEST raid_superblock_test 00:14:14.400 ************************************ 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:14.400 10:42:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74853 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74853 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74853 ']' 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.400 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.676 [2024-11-15 10:42:44.990702] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:14:14.676 [2024-11-15 10:42:44.990876] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74853 ] 00:14:14.676 [2024-11-15 10:42:45.172847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.951 [2024-11-15 10:42:45.280459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.951 [2024-11-15 10:42:45.461209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.951 [2024-11-15 10:42:45.461266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:15.518 
10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.518 malloc1 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.518 [2024-11-15 10:42:46.059523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:15.518 [2024-11-15 10:42:46.059751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.518 [2024-11-15 10:42:46.059832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:15.518 [2024-11-15 10:42:46.060085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.518 [2024-11-15 10:42:46.062787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.518 [2024-11-15 10:42:46.062973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:15.518 pt1 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.518 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 malloc2 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 [2024-11-15 10:42:46.112038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:15.777 [2024-11-15 10:42:46.112112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.777 [2024-11-15 10:42:46.112150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:15.777 [2024-11-15 10:42:46.112165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.777 [2024-11-15 10:42:46.114760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.777 [2024-11-15 10:42:46.114809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:15.777 
pt2 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 malloc3 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 [2024-11-15 10:42:46.176852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:15.777 [2024-11-15 10:42:46.177049] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.777 [2024-11-15 10:42:46.177128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:15.777 [2024-11-15 10:42:46.177241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.777 [2024-11-15 10:42:46.179858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.777 [2024-11-15 10:42:46.180021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:15.777 pt3 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 malloc4 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 [2024-11-15 10:42:46.228813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:15.777 [2024-11-15 10:42:46.229019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.777 [2024-11-15 10:42:46.229098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:15.777 [2024-11-15 10:42:46.229209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.777 [2024-11-15 10:42:46.231848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.777 [2024-11-15 10:42:46.232009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:15.777 pt4 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 [2024-11-15 10:42:46.240919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:15.777 [2024-11-15 10:42:46.243184] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.777 [2024-11-15 10:42:46.243438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:15.777 [2024-11-15 10:42:46.243556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:15.777 [2024-11-15 10:42:46.243814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:15.777 [2024-11-15 10:42:46.243838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:15.777 [2024-11-15 10:42:46.244175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:15.777 [2024-11-15 10:42:46.244421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:15.777 [2024-11-15 10:42:46.244446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:15.777 [2024-11-15 10:42:46.244635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.777 
10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.777 "name": "raid_bdev1", 00:14:15.777 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:15.777 "strip_size_kb": 0, 00:14:15.777 "state": "online", 00:14:15.777 "raid_level": "raid1", 00:14:15.777 "superblock": true, 00:14:15.777 "num_base_bdevs": 4, 00:14:15.777 "num_base_bdevs_discovered": 4, 00:14:15.777 "num_base_bdevs_operational": 4, 00:14:15.777 "base_bdevs_list": [ 00:14:15.777 { 00:14:15.777 "name": "pt1", 00:14:15.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.777 "is_configured": true, 00:14:15.778 "data_offset": 2048, 00:14:15.778 "data_size": 63488 00:14:15.778 }, 00:14:15.778 { 00:14:15.778 "name": "pt2", 00:14:15.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.778 "is_configured": true, 00:14:15.778 "data_offset": 2048, 00:14:15.778 "data_size": 63488 00:14:15.778 }, 00:14:15.778 { 00:14:15.778 "name": "pt3", 00:14:15.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.778 "is_configured": true, 00:14:15.778 "data_offset": 2048, 00:14:15.778 "data_size": 63488 
00:14:15.778 }, 00:14:15.778 { 00:14:15.778 "name": "pt4", 00:14:15.778 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:15.778 "is_configured": true, 00:14:15.778 "data_offset": 2048, 00:14:15.778 "data_size": 63488 00:14:15.778 } 00:14:15.778 ] 00:14:15.778 }' 00:14:15.778 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.778 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.343 [2024-11-15 10:42:46.781475] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.343 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.343 "name": "raid_bdev1", 00:14:16.343 "aliases": [ 00:14:16.343 "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc" 00:14:16.343 ], 
00:14:16.343 "product_name": "Raid Volume", 00:14:16.343 "block_size": 512, 00:14:16.343 "num_blocks": 63488, 00:14:16.343 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:16.343 "assigned_rate_limits": { 00:14:16.343 "rw_ios_per_sec": 0, 00:14:16.343 "rw_mbytes_per_sec": 0, 00:14:16.343 "r_mbytes_per_sec": 0, 00:14:16.343 "w_mbytes_per_sec": 0 00:14:16.343 }, 00:14:16.343 "claimed": false, 00:14:16.343 "zoned": false, 00:14:16.343 "supported_io_types": { 00:14:16.343 "read": true, 00:14:16.343 "write": true, 00:14:16.343 "unmap": false, 00:14:16.343 "flush": false, 00:14:16.343 "reset": true, 00:14:16.343 "nvme_admin": false, 00:14:16.343 "nvme_io": false, 00:14:16.343 "nvme_io_md": false, 00:14:16.343 "write_zeroes": true, 00:14:16.343 "zcopy": false, 00:14:16.343 "get_zone_info": false, 00:14:16.343 "zone_management": false, 00:14:16.343 "zone_append": false, 00:14:16.343 "compare": false, 00:14:16.343 "compare_and_write": false, 00:14:16.343 "abort": false, 00:14:16.343 "seek_hole": false, 00:14:16.343 "seek_data": false, 00:14:16.343 "copy": false, 00:14:16.343 "nvme_iov_md": false 00:14:16.343 }, 00:14:16.343 "memory_domains": [ 00:14:16.343 { 00:14:16.343 "dma_device_id": "system", 00:14:16.343 "dma_device_type": 1 00:14:16.343 }, 00:14:16.343 { 00:14:16.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.343 "dma_device_type": 2 00:14:16.343 }, 00:14:16.343 { 00:14:16.343 "dma_device_id": "system", 00:14:16.343 "dma_device_type": 1 00:14:16.343 }, 00:14:16.343 { 00:14:16.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.344 "dma_device_type": 2 00:14:16.344 }, 00:14:16.344 { 00:14:16.344 "dma_device_id": "system", 00:14:16.344 "dma_device_type": 1 00:14:16.344 }, 00:14:16.344 { 00:14:16.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.344 "dma_device_type": 2 00:14:16.344 }, 00:14:16.344 { 00:14:16.344 "dma_device_id": "system", 00:14:16.344 "dma_device_type": 1 00:14:16.344 }, 00:14:16.344 { 00:14:16.344 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:16.344 "dma_device_type": 2 00:14:16.344 } 00:14:16.344 ], 00:14:16.344 "driver_specific": { 00:14:16.344 "raid": { 00:14:16.344 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:16.344 "strip_size_kb": 0, 00:14:16.344 "state": "online", 00:14:16.344 "raid_level": "raid1", 00:14:16.344 "superblock": true, 00:14:16.344 "num_base_bdevs": 4, 00:14:16.344 "num_base_bdevs_discovered": 4, 00:14:16.344 "num_base_bdevs_operational": 4, 00:14:16.344 "base_bdevs_list": [ 00:14:16.344 { 00:14:16.344 "name": "pt1", 00:14:16.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.344 "is_configured": true, 00:14:16.344 "data_offset": 2048, 00:14:16.344 "data_size": 63488 00:14:16.344 }, 00:14:16.344 { 00:14:16.344 "name": "pt2", 00:14:16.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.344 "is_configured": true, 00:14:16.344 "data_offset": 2048, 00:14:16.344 "data_size": 63488 00:14:16.344 }, 00:14:16.344 { 00:14:16.344 "name": "pt3", 00:14:16.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.344 "is_configured": true, 00:14:16.344 "data_offset": 2048, 00:14:16.344 "data_size": 63488 00:14:16.344 }, 00:14:16.344 { 00:14:16.344 "name": "pt4", 00:14:16.344 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:16.344 "is_configured": true, 00:14:16.344 "data_offset": 2048, 00:14:16.344 "data_size": 63488 00:14:16.344 } 00:14:16.344 ] 00:14:16.344 } 00:14:16.344 } 00:14:16.344 }' 00:14:16.344 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.344 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:16.344 pt2 00:14:16.344 pt3 00:14:16.344 pt4' 00:14:16.344 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.602 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.602 10:42:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:16.602 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.860 [2024-11-15 10:42:47.165525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc ']' 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.860 [2024-11-15 10:42:47.221142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.860 [2024-11-15 10:42:47.221174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.860 [2024-11-15 10:42:47.221266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.860 [2024-11-15 10:42:47.221406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.860 [2024-11-15 10:42:47.221433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.860 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.861 10:42:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.861 [2024-11-15 10:42:47.365210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:16.861 [2024-11-15 10:42:47.367570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:16.861 [2024-11-15 10:42:47.367646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:16.861 [2024-11-15 10:42:47.367702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:16.861 [2024-11-15 10:42:47.367773] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:16.861 [2024-11-15 10:42:47.367859] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:16.861 [2024-11-15 10:42:47.367892] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:16.861 [2024-11-15 10:42:47.367924] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:16.861 [2024-11-15 10:42:47.367945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.861 [2024-11-15 10:42:47.367962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:14:16.861 request: 00:14:16.861 { 00:14:16.861 "name": "raid_bdev1", 00:14:16.861 "raid_level": "raid1", 00:14:16.861 "base_bdevs": [ 00:14:16.861 "malloc1", 00:14:16.861 "malloc2", 00:14:16.861 "malloc3", 00:14:16.861 "malloc4" 00:14:16.861 ], 00:14:16.861 "superblock": false, 00:14:16.861 "method": "bdev_raid_create", 00:14:16.861 "req_id": 1 00:14:16.861 } 00:14:16.861 Got JSON-RPC error response 00:14:16.861 response: 00:14:16.861 { 00:14:16.861 "code": -17, 00:14:16.861 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:16.861 } 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.861 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:17.119 
10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.119 [2024-11-15 10:42:47.437247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:17.119 [2024-11-15 10:42:47.437541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.119 [2024-11-15 10:42:47.437700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:17.119 [2024-11-15 10:42:47.437849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.119 [2024-11-15 10:42:47.440760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.119 [2024-11-15 10:42:47.440933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:17.119 [2024-11-15 10:42:47.441158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:17.119 [2024-11-15 10:42:47.441384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:17.119 pt1 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.119 10:42:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.119 "name": "raid_bdev1", 00:14:17.119 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:17.119 "strip_size_kb": 0, 00:14:17.119 "state": "configuring", 00:14:17.119 "raid_level": "raid1", 00:14:17.119 "superblock": true, 00:14:17.119 "num_base_bdevs": 4, 00:14:17.119 "num_base_bdevs_discovered": 1, 00:14:17.119 "num_base_bdevs_operational": 4, 00:14:17.119 "base_bdevs_list": [ 00:14:17.119 { 00:14:17.119 "name": "pt1", 00:14:17.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:17.119 "is_configured": true, 00:14:17.119 "data_offset": 2048, 00:14:17.119 "data_size": 63488 00:14:17.119 }, 00:14:17.119 { 00:14:17.119 "name": null, 00:14:17.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.119 "is_configured": false, 00:14:17.119 "data_offset": 2048, 00:14:17.119 "data_size": 63488 00:14:17.119 }, 00:14:17.119 { 00:14:17.119 "name": null, 00:14:17.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.119 
"is_configured": false, 00:14:17.119 "data_offset": 2048, 00:14:17.119 "data_size": 63488 00:14:17.119 }, 00:14:17.119 { 00:14:17.119 "name": null, 00:14:17.119 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:17.119 "is_configured": false, 00:14:17.119 "data_offset": 2048, 00:14:17.119 "data_size": 63488 00:14:17.119 } 00:14:17.119 ] 00:14:17.119 }' 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.119 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.378 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.636 [2024-11-15 10:42:47.941434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:17.636 [2024-11-15 10:42:47.941674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.636 [2024-11-15 10:42:47.941717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:17.636 [2024-11-15 10:42:47.941736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.636 [2024-11-15 10:42:47.942267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.636 [2024-11-15 10:42:47.942304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:17.636 [2024-11-15 10:42:47.942424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:17.636 [2024-11-15 10:42:47.942471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:14:17.636 pt2 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.636 [2024-11-15 10:42:47.949405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.636 10:42:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.636 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.636 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.636 "name": "raid_bdev1", 00:14:17.636 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:17.636 "strip_size_kb": 0, 00:14:17.636 "state": "configuring", 00:14:17.636 "raid_level": "raid1", 00:14:17.636 "superblock": true, 00:14:17.636 "num_base_bdevs": 4, 00:14:17.636 "num_base_bdevs_discovered": 1, 00:14:17.636 "num_base_bdevs_operational": 4, 00:14:17.636 "base_bdevs_list": [ 00:14:17.636 { 00:14:17.636 "name": "pt1", 00:14:17.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:17.636 "is_configured": true, 00:14:17.636 "data_offset": 2048, 00:14:17.636 "data_size": 63488 00:14:17.636 }, 00:14:17.636 { 00:14:17.636 "name": null, 00:14:17.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.636 "is_configured": false, 00:14:17.636 "data_offset": 0, 00:14:17.636 "data_size": 63488 00:14:17.636 }, 00:14:17.636 { 00:14:17.636 "name": null, 00:14:17.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.636 "is_configured": false, 00:14:17.636 "data_offset": 2048, 00:14:17.637 "data_size": 63488 00:14:17.637 }, 00:14:17.637 { 00:14:17.637 "name": null, 00:14:17.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:17.637 "is_configured": false, 00:14:17.637 "data_offset": 2048, 00:14:17.637 "data_size": 63488 00:14:17.637 } 00:14:17.637 ] 00:14:17.637 }' 00:14:17.637 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.637 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.895 [2024-11-15 10:42:48.437541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:17.895 [2024-11-15 10:42:48.437628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.895 [2024-11-15 10:42:48.437659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:17.895 [2024-11-15 10:42:48.437673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.895 [2024-11-15 10:42:48.438226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.895 [2024-11-15 10:42:48.438253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:17.895 [2024-11-15 10:42:48.438381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:17.895 [2024-11-15 10:42:48.438416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:17.895 pt2 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:17.895 10:42:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.895 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.895 [2024-11-15 10:42:48.449516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:17.895 [2024-11-15 10:42:48.449578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.895 [2024-11-15 10:42:48.449607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:17.895 [2024-11-15 10:42:48.449621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.895 [2024-11-15 10:42:48.450063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.895 [2024-11-15 10:42:48.450096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:17.895 [2024-11-15 10:42:48.450178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:17.895 [2024-11-15 10:42:48.450207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:18.153 pt3 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.153 [2024-11-15 10:42:48.457493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:18.153 [2024-11-15 
10:42:48.457551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.153 [2024-11-15 10:42:48.457578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:18.153 [2024-11-15 10:42:48.457592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.153 [2024-11-15 10:42:48.458049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.153 [2024-11-15 10:42:48.458091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:18.153 [2024-11-15 10:42:48.458176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:18.153 [2024-11-15 10:42:48.458215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:18.153 [2024-11-15 10:42:48.458417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:18.153 [2024-11-15 10:42:48.458435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:18.153 [2024-11-15 10:42:48.458748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:18.153 [2024-11-15 10:42:48.458954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:18.153 [2024-11-15 10:42:48.458989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:18.153 [2024-11-15 10:42:48.459164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.153 pt4 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.153 "name": "raid_bdev1", 00:14:18.153 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:18.153 "strip_size_kb": 0, 00:14:18.153 "state": "online", 00:14:18.153 "raid_level": "raid1", 00:14:18.153 "superblock": true, 00:14:18.153 "num_base_bdevs": 4, 00:14:18.153 
"num_base_bdevs_discovered": 4, 00:14:18.153 "num_base_bdevs_operational": 4, 00:14:18.153 "base_bdevs_list": [ 00:14:18.153 { 00:14:18.153 "name": "pt1", 00:14:18.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.153 "is_configured": true, 00:14:18.153 "data_offset": 2048, 00:14:18.153 "data_size": 63488 00:14:18.153 }, 00:14:18.153 { 00:14:18.153 "name": "pt2", 00:14:18.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.153 "is_configured": true, 00:14:18.153 "data_offset": 2048, 00:14:18.153 "data_size": 63488 00:14:18.153 }, 00:14:18.153 { 00:14:18.153 "name": "pt3", 00:14:18.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.153 "is_configured": true, 00:14:18.153 "data_offset": 2048, 00:14:18.153 "data_size": 63488 00:14:18.153 }, 00:14:18.153 { 00:14:18.153 "name": "pt4", 00:14:18.153 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:18.153 "is_configured": true, 00:14:18.153 "data_offset": 2048, 00:14:18.153 "data_size": 63488 00:14:18.153 } 00:14:18.153 ] 00:14:18.153 }' 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.153 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.412 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:18.412 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:18.412 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:18.412 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:18.412 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:18.412 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:18.412 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:18.412 10:42:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:18.412 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.412 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.412 [2024-11-15 10:42:48.962110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.670 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.670 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:18.670 "name": "raid_bdev1", 00:14:18.670 "aliases": [ 00:14:18.670 "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc" 00:14:18.670 ], 00:14:18.670 "product_name": "Raid Volume", 00:14:18.670 "block_size": 512, 00:14:18.670 "num_blocks": 63488, 00:14:18.670 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:18.670 "assigned_rate_limits": { 00:14:18.670 "rw_ios_per_sec": 0, 00:14:18.670 "rw_mbytes_per_sec": 0, 00:14:18.670 "r_mbytes_per_sec": 0, 00:14:18.670 "w_mbytes_per_sec": 0 00:14:18.670 }, 00:14:18.670 "claimed": false, 00:14:18.670 "zoned": false, 00:14:18.670 "supported_io_types": { 00:14:18.670 "read": true, 00:14:18.670 "write": true, 00:14:18.670 "unmap": false, 00:14:18.670 "flush": false, 00:14:18.670 "reset": true, 00:14:18.670 "nvme_admin": false, 00:14:18.670 "nvme_io": false, 00:14:18.670 "nvme_io_md": false, 00:14:18.670 "write_zeroes": true, 00:14:18.670 "zcopy": false, 00:14:18.670 "get_zone_info": false, 00:14:18.670 "zone_management": false, 00:14:18.670 "zone_append": false, 00:14:18.670 "compare": false, 00:14:18.670 "compare_and_write": false, 00:14:18.670 "abort": false, 00:14:18.670 "seek_hole": false, 00:14:18.670 "seek_data": false, 00:14:18.670 "copy": false, 00:14:18.670 "nvme_iov_md": false 00:14:18.670 }, 00:14:18.670 "memory_domains": [ 00:14:18.670 { 00:14:18.670 "dma_device_id": "system", 00:14:18.670 
"dma_device_type": 1 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.670 "dma_device_type": 2 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "dma_device_id": "system", 00:14:18.670 "dma_device_type": 1 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.670 "dma_device_type": 2 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "dma_device_id": "system", 00:14:18.670 "dma_device_type": 1 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.670 "dma_device_type": 2 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "dma_device_id": "system", 00:14:18.670 "dma_device_type": 1 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.670 "dma_device_type": 2 00:14:18.670 } 00:14:18.670 ], 00:14:18.670 "driver_specific": { 00:14:18.671 "raid": { 00:14:18.671 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:18.671 "strip_size_kb": 0, 00:14:18.671 "state": "online", 00:14:18.671 "raid_level": "raid1", 00:14:18.671 "superblock": true, 00:14:18.671 "num_base_bdevs": 4, 00:14:18.671 "num_base_bdevs_discovered": 4, 00:14:18.671 "num_base_bdevs_operational": 4, 00:14:18.671 "base_bdevs_list": [ 00:14:18.671 { 00:14:18.671 "name": "pt1", 00:14:18.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.671 "is_configured": true, 00:14:18.671 "data_offset": 2048, 00:14:18.671 "data_size": 63488 00:14:18.671 }, 00:14:18.671 { 00:14:18.671 "name": "pt2", 00:14:18.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.671 "is_configured": true, 00:14:18.671 "data_offset": 2048, 00:14:18.671 "data_size": 63488 00:14:18.671 }, 00:14:18.671 { 00:14:18.671 "name": "pt3", 00:14:18.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.671 "is_configured": true, 00:14:18.671 "data_offset": 2048, 00:14:18.671 "data_size": 63488 00:14:18.671 }, 00:14:18.671 { 00:14:18.671 "name": "pt4", 00:14:18.671 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:18.671 "is_configured": true, 00:14:18.671 "data_offset": 2048, 00:14:18.671 "data_size": 63488 00:14:18.671 } 00:14:18.671 ] 00:14:18.671 } 00:14:18.671 } 00:14:18.671 }' 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:18.671 pt2 00:14:18.671 pt3 00:14:18.671 pt4' 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.671 10:42:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.671 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.929 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.930 [2024-11-15 10:42:49.362178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc '!=' 6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc ']' 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.930 [2024-11-15 10:42:49.413887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:18.930 10:42:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.930 "name": "raid_bdev1", 00:14:18.930 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:18.930 "strip_size_kb": 0, 00:14:18.930 "state": "online", 
00:14:18.930 "raid_level": "raid1", 00:14:18.930 "superblock": true, 00:14:18.930 "num_base_bdevs": 4, 00:14:18.930 "num_base_bdevs_discovered": 3, 00:14:18.930 "num_base_bdevs_operational": 3, 00:14:18.930 "base_bdevs_list": [ 00:14:18.930 { 00:14:18.930 "name": null, 00:14:18.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.930 "is_configured": false, 00:14:18.930 "data_offset": 0, 00:14:18.930 "data_size": 63488 00:14:18.930 }, 00:14:18.930 { 00:14:18.930 "name": "pt2", 00:14:18.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.930 "is_configured": true, 00:14:18.930 "data_offset": 2048, 00:14:18.930 "data_size": 63488 00:14:18.930 }, 00:14:18.930 { 00:14:18.930 "name": "pt3", 00:14:18.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.930 "is_configured": true, 00:14:18.930 "data_offset": 2048, 00:14:18.930 "data_size": 63488 00:14:18.930 }, 00:14:18.930 { 00:14:18.930 "name": "pt4", 00:14:18.930 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:18.930 "is_configured": true, 00:14:18.930 "data_offset": 2048, 00:14:18.930 "data_size": 63488 00:14:18.930 } 00:14:18.930 ] 00:14:18.930 }' 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.930 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.496 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:19.496 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.496 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.496 [2024-11-15 10:42:49.925979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.497 [2024-11-15 10:42:49.926160] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.497 [2024-11-15 10:42:49.926285] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:19.497 [2024-11-15 10:42:49.926412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.497 [2024-11-15 10:42:49.926433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:19.497 
10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.497 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.497 [2024-11-15 10:42:50.022053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:19.497 [2024-11-15 10:42:50.022156] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.497 [2024-11-15 10:42:50.022201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:19.497 [2024-11-15 10:42:50.022224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.497 [2024-11-15 10:42:50.025922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.497 [2024-11-15 10:42:50.026147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:19.497 [2024-11-15 10:42:50.026322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:19.497 [2024-11-15 10:42:50.026439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:19.497 pt2 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.497 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.755 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.755 "name": "raid_bdev1", 00:14:19.755 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:19.755 "strip_size_kb": 0, 00:14:19.755 "state": "configuring", 00:14:19.755 "raid_level": "raid1", 00:14:19.755 "superblock": true, 00:14:19.755 "num_base_bdevs": 4, 00:14:19.755 "num_base_bdevs_discovered": 1, 00:14:19.755 "num_base_bdevs_operational": 3, 00:14:19.755 "base_bdevs_list": [ 00:14:19.755 { 00:14:19.755 "name": null, 00:14:19.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.755 "is_configured": false, 00:14:19.755 "data_offset": 2048, 00:14:19.755 "data_size": 63488 00:14:19.755 }, 00:14:19.755 { 00:14:19.755 "name": "pt2", 00:14:19.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.755 "is_configured": true, 00:14:19.755 "data_offset": 2048, 00:14:19.755 "data_size": 63488 00:14:19.755 }, 00:14:19.755 { 00:14:19.755 "name": null, 00:14:19.755 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.755 "is_configured": false, 00:14:19.755 "data_offset": 2048, 00:14:19.755 "data_size": 63488 00:14:19.755 }, 00:14:19.755 { 00:14:19.755 "name": null, 00:14:19.755 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:19.755 "is_configured": false, 00:14:19.755 "data_offset": 2048, 00:14:19.755 "data_size": 63488 00:14:19.755 } 00:14:19.755 ] 00:14:19.755 }' 
00:14:19.755 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.755 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.013 [2024-11-15 10:42:50.546550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:20.013 [2024-11-15 10:42:50.546631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.013 [2024-11-15 10:42:50.546671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:20.013 [2024-11-15 10:42:50.546686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.013 [2024-11-15 10:42:50.547275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.013 [2024-11-15 10:42:50.547301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:20.013 [2024-11-15 10:42:50.547430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:20.013 [2024-11-15 10:42:50.547466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:20.013 pt3 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.013 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.271 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.271 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.271 "name": "raid_bdev1", 00:14:20.271 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:20.271 "strip_size_kb": 0, 00:14:20.271 "state": "configuring", 00:14:20.271 "raid_level": "raid1", 00:14:20.271 "superblock": true, 00:14:20.271 "num_base_bdevs": 4, 00:14:20.271 "num_base_bdevs_discovered": 2, 00:14:20.271 "num_base_bdevs_operational": 3, 00:14:20.271 
"base_bdevs_list": [ 00:14:20.271 { 00:14:20.271 "name": null, 00:14:20.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.271 "is_configured": false, 00:14:20.271 "data_offset": 2048, 00:14:20.271 "data_size": 63488 00:14:20.271 }, 00:14:20.271 { 00:14:20.271 "name": "pt2", 00:14:20.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.271 "is_configured": true, 00:14:20.271 "data_offset": 2048, 00:14:20.271 "data_size": 63488 00:14:20.271 }, 00:14:20.271 { 00:14:20.271 "name": "pt3", 00:14:20.271 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.271 "is_configured": true, 00:14:20.271 "data_offset": 2048, 00:14:20.271 "data_size": 63488 00:14:20.271 }, 00:14:20.271 { 00:14:20.271 "name": null, 00:14:20.271 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:20.271 "is_configured": false, 00:14:20.271 "data_offset": 2048, 00:14:20.271 "data_size": 63488 00:14:20.271 } 00:14:20.271 ] 00:14:20.271 }' 00:14:20.271 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.271 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.529 [2024-11-15 10:42:51.046693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:20.529 [2024-11-15 10:42:51.046790] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.529 [2024-11-15 10:42:51.046831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:20.529 [2024-11-15 10:42:51.046846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.529 [2024-11-15 10:42:51.047420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.529 [2024-11-15 10:42:51.047448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:20.529 [2024-11-15 10:42:51.047552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:20.529 [2024-11-15 10:42:51.047586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:20.529 [2024-11-15 10:42:51.047761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:20.529 [2024-11-15 10:42:51.047778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.529 [2024-11-15 10:42:51.048078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:20.529 [2024-11-15 10:42:51.048283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:20.529 [2024-11-15 10:42:51.048306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:20.529 [2024-11-15 10:42:51.048501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.529 pt4 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.529 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.789 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.789 "name": "raid_bdev1", 00:14:20.789 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:20.789 "strip_size_kb": 0, 00:14:20.789 "state": "online", 00:14:20.789 "raid_level": "raid1", 00:14:20.789 "superblock": true, 00:14:20.789 "num_base_bdevs": 4, 00:14:20.789 "num_base_bdevs_discovered": 3, 00:14:20.789 "num_base_bdevs_operational": 3, 00:14:20.789 "base_bdevs_list": [ 00:14:20.789 { 00:14:20.789 "name": null, 00:14:20.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.789 "is_configured": false, 00:14:20.789 
"data_offset": 2048, 00:14:20.789 "data_size": 63488 00:14:20.789 }, 00:14:20.789 { 00:14:20.789 "name": "pt2", 00:14:20.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.789 "is_configured": true, 00:14:20.789 "data_offset": 2048, 00:14:20.789 "data_size": 63488 00:14:20.789 }, 00:14:20.789 { 00:14:20.789 "name": "pt3", 00:14:20.789 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.789 "is_configured": true, 00:14:20.789 "data_offset": 2048, 00:14:20.789 "data_size": 63488 00:14:20.789 }, 00:14:20.789 { 00:14:20.789 "name": "pt4", 00:14:20.789 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:20.789 "is_configured": true, 00:14:20.789 "data_offset": 2048, 00:14:20.789 "data_size": 63488 00:14:20.789 } 00:14:20.789 ] 00:14:20.789 }' 00:14:20.789 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.789 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.049 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:21.049 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.049 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.049 [2024-11-15 10:42:51.554760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.049 [2024-11-15 10:42:51.554795] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.049 [2024-11-15 10:42:51.554889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.049 [2024-11-15 10:42:51.555000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.049 [2024-11-15 10:42:51.555022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:21.049 10:42:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.049 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.049 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.049 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:21.049 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.049 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.311 [2024-11-15 10:42:51.622768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:21.311 [2024-11-15 10:42:51.622848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:21.311 [2024-11-15 10:42:51.622876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:21.311 [2024-11-15 10:42:51.622891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.311 [2024-11-15 10:42:51.625580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.311 [2024-11-15 10:42:51.625632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:21.311 [2024-11-15 10:42:51.625735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:21.311 [2024-11-15 10:42:51.625799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:21.311 [2024-11-15 10:42:51.625968] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:21.311 [2024-11-15 10:42:51.625993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.311 [2024-11-15 10:42:51.626014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:21.311 [2024-11-15 10:42:51.626090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:21.311 [2024-11-15 10:42:51.626255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:21.311 pt1 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.311 "name": "raid_bdev1", 00:14:21.311 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:21.311 "strip_size_kb": 0, 00:14:21.311 "state": "configuring", 00:14:21.311 "raid_level": "raid1", 00:14:21.311 "superblock": true, 00:14:21.311 "num_base_bdevs": 4, 00:14:21.311 "num_base_bdevs_discovered": 2, 00:14:21.311 "num_base_bdevs_operational": 3, 00:14:21.311 "base_bdevs_list": [ 00:14:21.311 { 00:14:21.311 "name": null, 00:14:21.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.311 "is_configured": false, 00:14:21.311 "data_offset": 2048, 00:14:21.311 
"data_size": 63488 00:14:21.311 }, 00:14:21.311 { 00:14:21.311 "name": "pt2", 00:14:21.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.311 "is_configured": true, 00:14:21.311 "data_offset": 2048, 00:14:21.311 "data_size": 63488 00:14:21.311 }, 00:14:21.311 { 00:14:21.311 "name": "pt3", 00:14:21.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.311 "is_configured": true, 00:14:21.311 "data_offset": 2048, 00:14:21.311 "data_size": 63488 00:14:21.311 }, 00:14:21.311 { 00:14:21.311 "name": null, 00:14:21.311 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:21.311 "is_configured": false, 00:14:21.311 "data_offset": 2048, 00:14:21.311 "data_size": 63488 00:14:21.311 } 00:14:21.311 ] 00:14:21.311 }' 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.311 10:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 [2024-11-15 
10:42:52.186970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:21.878 [2024-11-15 10:42:52.187061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.878 [2024-11-15 10:42:52.187095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:21.878 [2024-11-15 10:42:52.187110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.878 [2024-11-15 10:42:52.187660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.878 [2024-11-15 10:42:52.187699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:21.878 [2024-11-15 10:42:52.187805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:21.878 [2024-11-15 10:42:52.187846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:21.878 [2024-11-15 10:42:52.188016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:21.878 [2024-11-15 10:42:52.188032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:21.878 [2024-11-15 10:42:52.188365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:21.878 [2024-11-15 10:42:52.188577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:21.878 [2024-11-15 10:42:52.188605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:21.878 [2024-11-15 10:42:52.188781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.878 pt4 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.878 10:42:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.878 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.878 "name": "raid_bdev1", 00:14:21.878 "uuid": "6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc", 00:14:21.878 "strip_size_kb": 0, 00:14:21.878 "state": "online", 00:14:21.878 "raid_level": "raid1", 00:14:21.878 "superblock": true, 00:14:21.878 "num_base_bdevs": 4, 00:14:21.878 "num_base_bdevs_discovered": 3, 00:14:21.878 "num_base_bdevs_operational": 3, 00:14:21.878 "base_bdevs_list": [ 00:14:21.878 { 
00:14:21.878 "name": null, 00:14:21.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.878 "is_configured": false, 00:14:21.878 "data_offset": 2048, 00:14:21.878 "data_size": 63488 00:14:21.878 }, 00:14:21.878 { 00:14:21.878 "name": "pt2", 00:14:21.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.878 "is_configured": true, 00:14:21.878 "data_offset": 2048, 00:14:21.878 "data_size": 63488 00:14:21.878 }, 00:14:21.878 { 00:14:21.878 "name": "pt3", 00:14:21.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.878 "is_configured": true, 00:14:21.878 "data_offset": 2048, 00:14:21.878 "data_size": 63488 00:14:21.878 }, 00:14:21.878 { 00:14:21.878 "name": "pt4", 00:14:21.878 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:21.879 "is_configured": true, 00:14:21.879 "data_offset": 2048, 00:14:21.879 "data_size": 63488 00:14:21.879 } 00:14:21.879 ] 00:14:21.879 }' 00:14:21.879 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.879 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:22.445 
10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.445 [2024-11-15 10:42:52.771481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc '!=' 6c7a8396-6f9d-4264-bb35-5f5f63fa7cbc ']' 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74853 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74853 ']' 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74853 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74853 00:14:22.445 killing process with pid 74853 00:14:22.445 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:22.446 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:22.446 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74853' 00:14:22.446 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74853 00:14:22.446 [2024-11-15 10:42:52.845262] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.446 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74853 00:14:22.446 [2024-11-15 10:42:52.845397] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.446 [2024-11-15 10:42:52.845497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.446 [2024-11-15 10:42:52.845520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:22.705 [2024-11-15 10:42:53.178013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:23.639 10:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:23.639 00:14:23.639 real 0m9.275s 00:14:23.639 user 0m15.433s 00:14:23.639 sys 0m1.218s 00:14:23.639 10:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:23.639 ************************************ 00:14:23.639 END TEST raid_superblock_test 00:14:23.639 ************************************ 00:14:23.639 10:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.898 10:42:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:23.898 10:42:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:23.898 10:42:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:23.898 10:42:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:23.898 ************************************ 00:14:23.898 START TEST raid_read_error_test 00:14:23.898 ************************************ 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:23.898 10:42:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BNyybs0BYM 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75347 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75347 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75347 ']' 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:23.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:23.898 10:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.898 [2024-11-15 10:42:54.330656] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:14:23.898 [2024-11-15 10:42:54.330827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75347 ] 00:14:24.157 [2024-11-15 10:42:54.517555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.157 [2024-11-15 10:42:54.642460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.414 [2024-11-15 10:42:54.840892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.414 [2024-11-15 10:42:54.840947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 BaseBdev1_malloc 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 true 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 [2024-11-15 10:42:55.390646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:24.980 [2024-11-15 10:42:55.390722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.980 [2024-11-15 10:42:55.390755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:24.980 [2024-11-15 10:42:55.390774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.980 [2024-11-15 10:42:55.393405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.980 [2024-11-15 10:42:55.393465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:24.980 BaseBdev1 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 BaseBdev2_malloc 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 true 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 [2024-11-15 10:42:55.446317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:24.980 [2024-11-15 10:42:55.446398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.980 [2024-11-15 10:42:55.446425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:24.980 [2024-11-15 10:42:55.446442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.980 [2024-11-15 10:42:55.449025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.980 [2024-11-15 10:42:55.449078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:24.980 BaseBdev2 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:24.980 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.981 BaseBdev3_malloc 00:14:24.981 10:42:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.981 true 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.981 [2024-11-15 10:42:55.504806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:24.981 [2024-11-15 10:42:55.504890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.981 [2024-11-15 10:42:55.504929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:24.981 [2024-11-15 10:42:55.504949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.981 [2024-11-15 10:42:55.507676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.981 [2024-11-15 10:42:55.507729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:24.981 BaseBdev3 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.981 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 BaseBdev4_malloc 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 true 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 [2024-11-15 10:42:55.560613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:25.239 [2024-11-15 10:42:55.560683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.239 [2024-11-15 10:42:55.560711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:25.239 [2024-11-15 10:42:55.560729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.239 [2024-11-15 10:42:55.563298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.239 [2024-11-15 10:42:55.563366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:25.239 BaseBdev4 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 [2024-11-15 10:42:55.568690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.239 [2024-11-15 10:42:55.571045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.239 [2024-11-15 10:42:55.571180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.239 [2024-11-15 10:42:55.571296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.239 [2024-11-15 10:42:55.571645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:25.239 [2024-11-15 10:42:55.571670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:25.239 [2024-11-15 10:42:55.572002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:25.239 [2024-11-15 10:42:55.572230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:25.239 [2024-11-15 10:42:55.572247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:25.239 [2024-11-15 10:42:55.572491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:25.239 10:42:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.239 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.239 "name": "raid_bdev1", 00:14:25.239 "uuid": "53b5f86a-4fbb-402a-8ed5-a5e270ae34fd", 00:14:25.239 "strip_size_kb": 0, 00:14:25.239 "state": "online", 00:14:25.239 "raid_level": "raid1", 00:14:25.239 "superblock": true, 00:14:25.239 "num_base_bdevs": 4, 00:14:25.239 "num_base_bdevs_discovered": 4, 00:14:25.239 "num_base_bdevs_operational": 4, 00:14:25.239 "base_bdevs_list": [ 00:14:25.239 { 
00:14:25.239 "name": "BaseBdev1", 00:14:25.239 "uuid": "bb6af969-4a3b-5538-b14c-89010d069532", 00:14:25.239 "is_configured": true, 00:14:25.239 "data_offset": 2048, 00:14:25.239 "data_size": 63488 00:14:25.239 }, 00:14:25.239 { 00:14:25.239 "name": "BaseBdev2", 00:14:25.239 "uuid": "f08d5753-5b48-5612-9b78-6cf8825bfa17", 00:14:25.239 "is_configured": true, 00:14:25.239 "data_offset": 2048, 00:14:25.239 "data_size": 63488 00:14:25.239 }, 00:14:25.239 { 00:14:25.239 "name": "BaseBdev3", 00:14:25.240 "uuid": "34a325d1-d1f9-5bae-8c00-9bedc026a29a", 00:14:25.240 "is_configured": true, 00:14:25.240 "data_offset": 2048, 00:14:25.240 "data_size": 63488 00:14:25.240 }, 00:14:25.240 { 00:14:25.240 "name": "BaseBdev4", 00:14:25.240 "uuid": "ea3d332c-6d8c-5b8f-99c4-11c3b5b20812", 00:14:25.240 "is_configured": true, 00:14:25.240 "data_offset": 2048, 00:14:25.240 "data_size": 63488 00:14:25.240 } 00:14:25.240 ] 00:14:25.240 }' 00:14:25.240 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.240 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.807 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:25.807 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:25.807 [2024-11-15 10:42:56.230164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:26.743 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:26.743 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.743 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.743 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.743 10:42:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:26.743 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:26.743 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:26.743 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:26.743 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.744 10:42:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.744 "name": "raid_bdev1", 00:14:26.744 "uuid": "53b5f86a-4fbb-402a-8ed5-a5e270ae34fd", 00:14:26.744 "strip_size_kb": 0, 00:14:26.744 "state": "online", 00:14:26.744 "raid_level": "raid1", 00:14:26.744 "superblock": true, 00:14:26.744 "num_base_bdevs": 4, 00:14:26.744 "num_base_bdevs_discovered": 4, 00:14:26.744 "num_base_bdevs_operational": 4, 00:14:26.744 "base_bdevs_list": [ 00:14:26.744 { 00:14:26.744 "name": "BaseBdev1", 00:14:26.744 "uuid": "bb6af969-4a3b-5538-b14c-89010d069532", 00:14:26.744 "is_configured": true, 00:14:26.744 "data_offset": 2048, 00:14:26.744 "data_size": 63488 00:14:26.744 }, 00:14:26.744 { 00:14:26.744 "name": "BaseBdev2", 00:14:26.744 "uuid": "f08d5753-5b48-5612-9b78-6cf8825bfa17", 00:14:26.744 "is_configured": true, 00:14:26.744 "data_offset": 2048, 00:14:26.744 "data_size": 63488 00:14:26.744 }, 00:14:26.744 { 00:14:26.744 "name": "BaseBdev3", 00:14:26.744 "uuid": "34a325d1-d1f9-5bae-8c00-9bedc026a29a", 00:14:26.744 "is_configured": true, 00:14:26.744 "data_offset": 2048, 00:14:26.744 "data_size": 63488 00:14:26.744 }, 00:14:26.744 { 00:14:26.744 "name": "BaseBdev4", 00:14:26.744 "uuid": "ea3d332c-6d8c-5b8f-99c4-11c3b5b20812", 00:14:26.744 "is_configured": true, 00:14:26.744 "data_offset": 2048, 00:14:26.744 "data_size": 63488 00:14:26.744 } 00:14:26.744 ] 00:14:26.744 }' 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.744 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.312 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.312 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.312 10:42:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.312 [2024-11-15 10:42:57.628953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.312 [2024-11-15 10:42:57.629000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.312 [2024-11-15 10:42:57.632580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.312 [2024-11-15 10:42:57.632661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.312 [2024-11-15 10:42:57.632815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.312 [2024-11-15 10:42:57.632835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:27.312 { 00:14:27.312 "results": [ 00:14:27.312 { 00:14:27.312 "job": "raid_bdev1", 00:14:27.312 "core_mask": "0x1", 00:14:27.312 "workload": "randrw", 00:14:27.312 "percentage": 50, 00:14:27.312 "status": "finished", 00:14:27.312 "queue_depth": 1, 00:14:27.312 "io_size": 131072, 00:14:27.312 "runtime": 1.396685, 00:14:27.312 "iops": 8591.05668064023, 00:14:27.312 "mibps": 1073.8820850800287, 00:14:27.312 "io_failed": 0, 00:14:27.312 "io_timeout": 0, 00:14:27.312 "avg_latency_us": 111.60907772617415, 00:14:27.312 "min_latency_us": 45.14909090909091, 00:14:27.312 "max_latency_us": 1876.7127272727273 00:14:27.312 } 00:14:27.312 ], 00:14:27.312 "core_count": 1 00:14:27.312 } 00:14:27.312 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.312 10:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75347 00:14:27.312 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75347 ']' 00:14:27.312 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75347 00:14:27.312 10:42:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:14:27.312 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:27.313 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75347 00:14:27.313 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:27.313 killing process with pid 75347 00:14:27.313 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:27.313 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75347' 00:14:27.313 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75347 00:14:27.313 [2024-11-15 10:42:57.665144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.313 10:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75347 00:14:27.572 [2024-11-15 10:42:57.937913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BNyybs0BYM 00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:28.526 00:14:28.526 real 0m4.772s 00:14:28.526 user 0m6.025s 00:14:28.526 sys 0m0.515s 
00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:28.526 10:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.526 ************************************ 00:14:28.526 END TEST raid_read_error_test 00:14:28.526 ************************************ 00:14:28.526 10:42:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:14:28.526 10:42:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:28.526 10:42:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:28.527 10:42:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.527 ************************************ 00:14:28.527 START TEST raid_write_error_test 00:14:28.527 ************************************ 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oUYtGZRF7y 00:14:28.527 10:42:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75493 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75493 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75493 ']' 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:28.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:28.527 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.786 [2024-11-15 10:42:59.148771] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:14:28.786 [2024-11-15 10:42:59.148946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75493 ] 00:14:28.786 [2024-11-15 10:42:59.329283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.044 [2024-11-15 10:42:59.432358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.303 [2024-11-15 10:42:59.612030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.303 [2024-11-15 10:42:59.612077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 BaseBdev1_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 true 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 [2024-11-15 10:43:00.232490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:29.871 [2024-11-15 10:43:00.232562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.871 [2024-11-15 10:43:00.232591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:29.871 [2024-11-15 10:43:00.232607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.871 [2024-11-15 10:43:00.235180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.871 [2024-11-15 10:43:00.235230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.871 BaseBdev1 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 BaseBdev2_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:29.871 10:43:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 true 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 [2024-11-15 10:43:00.283636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:29.871 [2024-11-15 10:43:00.283703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.871 [2024-11-15 10:43:00.283729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:29.871 [2024-11-15 10:43:00.283745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.871 [2024-11-15 10:43:00.286298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.871 [2024-11-15 10:43:00.286364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.871 BaseBdev2 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:29.871 BaseBdev3_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 true 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 [2024-11-15 10:43:00.349239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:29.871 [2024-11-15 10:43:00.349306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.871 [2024-11-15 10:43:00.349333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:29.871 [2024-11-15 10:43:00.349371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.871 [2024-11-15 10:43:00.351975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.871 [2024-11-15 10:43:00.352026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:29.871 BaseBdev3 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 BaseBdev4_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 true 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.871 [2024-11-15 10:43:00.400520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:29.871 [2024-11-15 10:43:00.400587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.871 [2024-11-15 10:43:00.400613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:29.871 [2024-11-15 10:43:00.400630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.871 [2024-11-15 10:43:00.403170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.871 [2024-11-15 10:43:00.403228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:29.871 BaseBdev4 
00:14:29.871 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.872 [2024-11-15 10:43:00.408588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.872 [2024-11-15 10:43:00.410821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.872 [2024-11-15 10:43:00.410931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.872 [2024-11-15 10:43:00.411037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:29.872 [2024-11-15 10:43:00.411374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:29.872 [2024-11-15 10:43:00.411397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:29.872 [2024-11-15 10:43:00.411706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:29.872 [2024-11-15 10:43:00.411929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:29.872 [2024-11-15 10:43:00.411945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:29.872 [2024-11-15 10:43:00.412131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.872 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.130 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.130 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.130 "name": "raid_bdev1", 00:14:30.130 "uuid": "1ab86977-c064-4285-bd76-a71adc517fee", 00:14:30.130 "strip_size_kb": 0, 00:14:30.130 "state": "online", 00:14:30.130 "raid_level": "raid1", 00:14:30.130 "superblock": true, 00:14:30.130 "num_base_bdevs": 4, 00:14:30.130 "num_base_bdevs_discovered": 4, 00:14:30.130 
"num_base_bdevs_operational": 4, 00:14:30.130 "base_bdevs_list": [ 00:14:30.130 { 00:14:30.130 "name": "BaseBdev1", 00:14:30.130 "uuid": "06fba135-da2a-5be9-9423-a6659aa6458b", 00:14:30.130 "is_configured": true, 00:14:30.130 "data_offset": 2048, 00:14:30.130 "data_size": 63488 00:14:30.130 }, 00:14:30.130 { 00:14:30.130 "name": "BaseBdev2", 00:14:30.130 "uuid": "07fbdc87-2253-53cf-8122-2f1475063d21", 00:14:30.130 "is_configured": true, 00:14:30.130 "data_offset": 2048, 00:14:30.130 "data_size": 63488 00:14:30.130 }, 00:14:30.130 { 00:14:30.130 "name": "BaseBdev3", 00:14:30.130 "uuid": "a47700a8-0298-58ab-bdb6-73141a9174d8", 00:14:30.130 "is_configured": true, 00:14:30.130 "data_offset": 2048, 00:14:30.130 "data_size": 63488 00:14:30.130 }, 00:14:30.130 { 00:14:30.131 "name": "BaseBdev4", 00:14:30.131 "uuid": "a63ec35f-df94-53e0-9c0c-a0b76d5f8116", 00:14:30.131 "is_configured": true, 00:14:30.131 "data_offset": 2048, 00:14:30.131 "data_size": 63488 00:14:30.131 } 00:14:30.131 ] 00:14:30.131 }' 00:14:30.131 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.131 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.698 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:30.698 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:30.698 [2024-11-15 10:43:01.094061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:31.633 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:31.633 10:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.633 10:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.633 [2024-11-15 10:43:01.971229] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:31.633 [2024-11-15 10:43:01.971295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:31.633 [2024-11-15 10:43:01.971578] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:14:31.633 10:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.633 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:31.633 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:31.633 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.634 10:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.634 10:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.634 "name": "raid_bdev1", 00:14:31.634 "uuid": "1ab86977-c064-4285-bd76-a71adc517fee", 00:14:31.634 "strip_size_kb": 0, 00:14:31.634 "state": "online", 00:14:31.634 "raid_level": "raid1", 00:14:31.634 "superblock": true, 00:14:31.634 "num_base_bdevs": 4, 00:14:31.634 "num_base_bdevs_discovered": 3, 00:14:31.634 "num_base_bdevs_operational": 3, 00:14:31.634 "base_bdevs_list": [ 00:14:31.634 { 00:14:31.634 "name": null, 00:14:31.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.634 "is_configured": false, 00:14:31.634 "data_offset": 0, 00:14:31.634 "data_size": 63488 00:14:31.634 }, 00:14:31.634 { 00:14:31.634 "name": "BaseBdev2", 00:14:31.634 "uuid": "07fbdc87-2253-53cf-8122-2f1475063d21", 00:14:31.634 "is_configured": true, 00:14:31.634 "data_offset": 2048, 00:14:31.634 "data_size": 63488 00:14:31.634 }, 00:14:31.634 { 00:14:31.634 "name": "BaseBdev3", 00:14:31.634 "uuid": "a47700a8-0298-58ab-bdb6-73141a9174d8", 00:14:31.634 "is_configured": true, 00:14:31.634 "data_offset": 2048, 00:14:31.634 "data_size": 63488 00:14:31.634 }, 00:14:31.634 { 00:14:31.634 "name": "BaseBdev4", 00:14:31.634 "uuid": "a63ec35f-df94-53e0-9c0c-a0b76d5f8116", 00:14:31.634 "is_configured": true, 00:14:31.634 "data_offset": 2048, 00:14:31.634 "data_size": 63488 00:14:31.634 } 00:14:31.634 ] 
00:14:31.634 }' 00:14:31.634 10:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.634 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.201 [2024-11-15 10:43:02.514029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.201 [2024-11-15 10:43:02.514068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.201 [2024-11-15 10:43:02.517542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.201 [2024-11-15 10:43:02.517604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.201 [2024-11-15 10:43:02.517731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.201 [2024-11-15 10:43:02.517750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:32.201 { 00:14:32.201 "results": [ 00:14:32.201 { 00:14:32.201 "job": "raid_bdev1", 00:14:32.201 "core_mask": "0x1", 00:14:32.201 "workload": "randrw", 00:14:32.201 "percentage": 50, 00:14:32.201 "status": "finished", 00:14:32.201 "queue_depth": 1, 00:14:32.201 "io_size": 131072, 00:14:32.201 "runtime": 1.41785, 00:14:32.201 "iops": 9697.076559579646, 00:14:32.201 "mibps": 1212.1345699474557, 00:14:32.201 "io_failed": 0, 00:14:32.201 "io_timeout": 0, 00:14:32.201 "avg_latency_us": 98.65584829309898, 00:14:32.201 "min_latency_us": 42.589090909090906, 00:14:32.201 "max_latency_us": 1876.7127272727273 00:14:32.201 } 00:14:32.201 ], 00:14:32.201 "core_count": 1 
00:14:32.201 } 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75493 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75493 ']' 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75493 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75493 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:32.201 killing process with pid 75493 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75493' 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75493 00:14:32.201 [2024-11-15 10:43:02.553054] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.201 10:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75493 00:14:32.507 [2024-11-15 10:43:02.824603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oUYtGZRF7y 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:33.442 00:14:33.442 real 0m4.818s 00:14:33.442 user 0m6.128s 00:14:33.442 sys 0m0.507s 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:33.442 10:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.443 ************************************ 00:14:33.443 END TEST raid_write_error_test 00:14:33.443 ************************************ 00:14:33.443 10:43:03 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:33.443 10:43:03 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:33.443 10:43:03 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:33.443 10:43:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:33.443 10:43:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:33.443 10:43:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.443 ************************************ 00:14:33.443 START TEST raid_rebuild_test 00:14:33.443 ************************************ 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:33.443 
10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75631 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75631 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75631 ']' 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:33.443 10:43:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.443 [2024-11-15 10:43:03.999834] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:14:33.701 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:33.701 Zero copy mechanism will not be used. 00:14:33.701 [2024-11-15 10:43:04.000118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75631 ] 00:14:33.701 [2024-11-15 10:43:04.172889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.959 [2024-11-15 10:43:04.290658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.959 [2024-11-15 10:43:04.469474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.959 [2024-11-15 10:43:04.469529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.526 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:34.526 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:14:34.526 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.526 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.527 BaseBdev1_malloc 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.527 10:43:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.527 [2024-11-15 10:43:05.075571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:34.527 [2024-11-15 10:43:05.075641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.527 [2024-11-15 10:43:05.075672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:34.527 [2024-11-15 10:43:05.075689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.527 [2024-11-15 10:43:05.078268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.527 [2024-11-15 10:43:05.078315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.527 BaseBdev1 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.527 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.786 BaseBdev2_malloc 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.786 [2024-11-15 10:43:05.119008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:34.786 [2024-11-15 10:43:05.119078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.786 [2024-11-15 10:43:05.119110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.786 [2024-11-15 10:43:05.119128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.786 [2024-11-15 10:43:05.121652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.786 [2024-11-15 10:43:05.121696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.786 BaseBdev2 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.786 spare_malloc 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.786 spare_delay 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.786 [2024-11-15 10:43:05.183129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:34.786 [2024-11-15 10:43:05.183205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.786 [2024-11-15 10:43:05.183237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:34.786 [2024-11-15 10:43:05.183255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.786 [2024-11-15 10:43:05.185843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.786 [2024-11-15 10:43:05.185889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.786 spare 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.786 [2024-11-15 10:43:05.191186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.786 [2024-11-15 10:43:05.193409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.786 [2024-11-15 10:43:05.193536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:34.786 [2024-11-15 10:43:05.193558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:34.786 [2024-11-15 10:43:05.193894] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:34.786 [2024-11-15 10:43:05.194109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:34.786 [2024-11-15 10:43:05.194136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:34.786 [2024-11-15 10:43:05.194323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.786 10:43:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.786 "name": "raid_bdev1", 00:14:34.786 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:34.786 "strip_size_kb": 0, 00:14:34.786 "state": "online", 00:14:34.786 "raid_level": "raid1", 00:14:34.786 "superblock": false, 00:14:34.786 "num_base_bdevs": 2, 00:14:34.786 "num_base_bdevs_discovered": 2, 00:14:34.786 "num_base_bdevs_operational": 2, 00:14:34.786 "base_bdevs_list": [ 00:14:34.786 { 00:14:34.786 "name": "BaseBdev1", 00:14:34.786 "uuid": "7b1ddd56-4a1b-53a1-8f9f-cfb612a75143", 00:14:34.786 "is_configured": true, 00:14:34.786 "data_offset": 0, 00:14:34.786 "data_size": 65536 00:14:34.786 }, 00:14:34.786 { 00:14:34.786 "name": "BaseBdev2", 00:14:34.786 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:34.786 "is_configured": true, 00:14:34.786 "data_offset": 0, 00:14:34.786 "data_size": 65536 00:14:34.786 } 00:14:34.786 ] 00:14:34.786 }' 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.786 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.353 [2024-11-15 10:43:05.699679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.353 10:43:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.353 10:43:05 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:35.612 [2024-11-15 10:43:06.043505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:35.612 /dev/nbd0 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.612 1+0 records in 00:14:35.612 1+0 records out 00:14:35.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364752 s, 11.2 MB/s 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:35.612 10:43:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:42.176 65536+0 records in 00:14:42.176 65536+0 records out 00:14:42.176 33554432 bytes (34 MB, 32 MiB) copied, 6.19998 s, 5.4 MB/s 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:42.176 [2024-11-15 10:43:12.591893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.176 10:43:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.176 [2024-11-15 10:43:12.627982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.177 "name": "raid_bdev1", 00:14:42.177 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:42.177 "strip_size_kb": 0, 00:14:42.177 "state": "online", 00:14:42.177 "raid_level": "raid1", 00:14:42.177 "superblock": false, 00:14:42.177 "num_base_bdevs": 2, 00:14:42.177 "num_base_bdevs_discovered": 1, 00:14:42.177 "num_base_bdevs_operational": 1, 00:14:42.177 "base_bdevs_list": [ 00:14:42.177 { 00:14:42.177 "name": null, 00:14:42.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.177 "is_configured": false, 00:14:42.177 "data_offset": 0, 00:14:42.177 "data_size": 65536 00:14:42.177 }, 00:14:42.177 { 00:14:42.177 "name": "BaseBdev2", 00:14:42.177 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:42.177 "is_configured": true, 00:14:42.177 "data_offset": 0, 00:14:42.177 "data_size": 65536 00:14:42.177 } 00:14:42.177 ] 00:14:42.177 }' 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.177 10:43:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.743 10:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:14:42.743 10:43:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.743 10:43:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.743 [2024-11-15 10:43:13.132141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.743 [2024-11-15 10:43:13.147284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:14:42.743 10:43:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.743 10:43:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:42.743 [2024-11-15 10:43:13.149518] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:43.679 "name": "raid_bdev1", 00:14:43.679 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:43.679 "strip_size_kb": 0, 00:14:43.679 "state": "online", 00:14:43.679 "raid_level": "raid1", 00:14:43.679 "superblock": false, 00:14:43.679 "num_base_bdevs": 2, 00:14:43.679 "num_base_bdevs_discovered": 2, 00:14:43.679 "num_base_bdevs_operational": 2, 00:14:43.679 "process": { 00:14:43.679 "type": "rebuild", 00:14:43.679 "target": "spare", 00:14:43.679 "progress": { 00:14:43.679 "blocks": 20480, 00:14:43.679 "percent": 31 00:14:43.679 } 00:14:43.679 }, 00:14:43.679 "base_bdevs_list": [ 00:14:43.679 { 00:14:43.679 "name": "spare", 00:14:43.679 "uuid": "e893dbcd-c036-5fb9-bc16-86d201b54e38", 00:14:43.679 "is_configured": true, 00:14:43.679 "data_offset": 0, 00:14:43.679 "data_size": 65536 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "name": "BaseBdev2", 00:14:43.679 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:43.679 "is_configured": true, 00:14:43.679 "data_offset": 0, 00:14:43.679 "data_size": 65536 00:14:43.679 } 00:14:43.679 ] 00:14:43.679 }' 00:14:43.679 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.943 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.943 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.943 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.943 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:43.943 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.943 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.943 [2024-11-15 10:43:14.335159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.943 [2024-11-15 10:43:14.356327] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:43.943 [2024-11-15 10:43:14.356432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.943 [2024-11-15 10:43:14.356463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.943 [2024-11-15 10:43:14.356495] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:43.943 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.943 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:43.943 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.944 "name": "raid_bdev1", 00:14:43.944 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:43.944 "strip_size_kb": 0, 00:14:43.944 "state": "online", 00:14:43.944 "raid_level": "raid1", 00:14:43.944 "superblock": false, 00:14:43.944 "num_base_bdevs": 2, 00:14:43.944 "num_base_bdevs_discovered": 1, 00:14:43.944 "num_base_bdevs_operational": 1, 00:14:43.944 "base_bdevs_list": [ 00:14:43.944 { 00:14:43.944 "name": null, 00:14:43.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.944 "is_configured": false, 00:14:43.944 "data_offset": 0, 00:14:43.944 "data_size": 65536 00:14:43.944 }, 00:14:43.944 { 00:14:43.944 "name": "BaseBdev2", 00:14:43.944 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:43.944 "is_configured": true, 00:14:43.944 "data_offset": 0, 00:14:43.944 "data_size": 65536 00:14:43.944 } 00:14:43.944 ] 00:14:43.944 }' 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.944 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.510 "name": "raid_bdev1", 00:14:44.510 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:44.510 "strip_size_kb": 0, 00:14:44.510 "state": "online", 00:14:44.510 "raid_level": "raid1", 00:14:44.510 "superblock": false, 00:14:44.510 "num_base_bdevs": 2, 00:14:44.510 "num_base_bdevs_discovered": 1, 00:14:44.510 "num_base_bdevs_operational": 1, 00:14:44.510 "base_bdevs_list": [ 00:14:44.510 { 00:14:44.510 "name": null, 00:14:44.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.510 "is_configured": false, 00:14:44.510 "data_offset": 0, 00:14:44.510 "data_size": 65536 00:14:44.510 }, 00:14:44.510 { 00:14:44.510 "name": "BaseBdev2", 00:14:44.510 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:44.510 "is_configured": true, 00:14:44.510 "data_offset": 0, 00:14:44.510 "data_size": 65536 00:14:44.510 } 00:14:44.510 ] 00:14:44.510 }' 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.510 10:43:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.510 10:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.510 10:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:44.510 10:43:15 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.510 10:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.510 [2024-11-15 10:43:15.033192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.510 [2024-11-15 10:43:15.047078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:44.510 10:43:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.510 10:43:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:44.510 [2024-11-15 10:43:15.049369] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.887 "name": "raid_bdev1", 00:14:45.887 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 
00:14:45.887 "strip_size_kb": 0, 00:14:45.887 "state": "online", 00:14:45.887 "raid_level": "raid1", 00:14:45.887 "superblock": false, 00:14:45.887 "num_base_bdevs": 2, 00:14:45.887 "num_base_bdevs_discovered": 2, 00:14:45.887 "num_base_bdevs_operational": 2, 00:14:45.887 "process": { 00:14:45.887 "type": "rebuild", 00:14:45.887 "target": "spare", 00:14:45.887 "progress": { 00:14:45.887 "blocks": 20480, 00:14:45.887 "percent": 31 00:14:45.887 } 00:14:45.887 }, 00:14:45.887 "base_bdevs_list": [ 00:14:45.887 { 00:14:45.887 "name": "spare", 00:14:45.887 "uuid": "e893dbcd-c036-5fb9-bc16-86d201b54e38", 00:14:45.887 "is_configured": true, 00:14:45.887 "data_offset": 0, 00:14:45.887 "data_size": 65536 00:14:45.887 }, 00:14:45.887 { 00:14:45.887 "name": "BaseBdev2", 00:14:45.887 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:45.887 "is_configured": true, 00:14:45.887 "data_offset": 0, 00:14:45.887 "data_size": 65536 00:14:45.887 } 00:14:45.887 ] 00:14:45.887 }' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=390 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.887 "name": "raid_bdev1", 00:14:45.887 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:45.887 "strip_size_kb": 0, 00:14:45.887 "state": "online", 00:14:45.887 "raid_level": "raid1", 00:14:45.887 "superblock": false, 00:14:45.887 "num_base_bdevs": 2, 00:14:45.887 "num_base_bdevs_discovered": 2, 00:14:45.887 "num_base_bdevs_operational": 2, 00:14:45.887 "process": { 00:14:45.887 "type": "rebuild", 00:14:45.887 "target": "spare", 00:14:45.887 "progress": { 00:14:45.887 "blocks": 22528, 00:14:45.887 "percent": 34 00:14:45.887 } 00:14:45.887 }, 00:14:45.887 "base_bdevs_list": [ 00:14:45.887 { 00:14:45.887 "name": "spare", 00:14:45.887 "uuid": "e893dbcd-c036-5fb9-bc16-86d201b54e38", 00:14:45.887 "is_configured": true, 00:14:45.887 "data_offset": 0, 
00:14:45.887 "data_size": 65536 00:14:45.887 }, 00:14:45.887 { 00:14:45.887 "name": "BaseBdev2", 00:14:45.887 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:45.887 "is_configured": true, 00:14:45.887 "data_offset": 0, 00:14:45.887 "data_size": 65536 00:14:45.887 } 00:14:45.887 ] 00:14:45.887 }' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.887 10:43:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.263 "name": "raid_bdev1", 00:14:47.263 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:47.263 "strip_size_kb": 0, 00:14:47.263 "state": "online", 00:14:47.263 "raid_level": "raid1", 00:14:47.263 "superblock": false, 00:14:47.263 "num_base_bdevs": 2, 00:14:47.263 "num_base_bdevs_discovered": 2, 00:14:47.263 "num_base_bdevs_operational": 2, 00:14:47.263 "process": { 00:14:47.263 "type": "rebuild", 00:14:47.263 "target": "spare", 00:14:47.263 "progress": { 00:14:47.263 "blocks": 47104, 00:14:47.263 "percent": 71 00:14:47.263 } 00:14:47.263 }, 00:14:47.263 "base_bdevs_list": [ 00:14:47.263 { 00:14:47.263 "name": "spare", 00:14:47.263 "uuid": "e893dbcd-c036-5fb9-bc16-86d201b54e38", 00:14:47.263 "is_configured": true, 00:14:47.263 "data_offset": 0, 00:14:47.263 "data_size": 65536 00:14:47.263 }, 00:14:47.263 { 00:14:47.263 "name": "BaseBdev2", 00:14:47.263 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:47.263 "is_configured": true, 00:14:47.263 "data_offset": 0, 00:14:47.263 "data_size": 65536 00:14:47.263 } 00:14:47.263 ] 00:14:47.263 }' 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.263 10:43:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.830 [2024-11-15 10:43:18.266176] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:47.830 [2024-11-15 10:43:18.266292] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on 
raid bdev raid_bdev1 00:14:47.830 [2024-11-15 10:43:18.266368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.088 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.088 "name": "raid_bdev1", 00:14:48.088 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:48.088 "strip_size_kb": 0, 00:14:48.088 "state": "online", 00:14:48.088 "raid_level": "raid1", 00:14:48.088 "superblock": false, 00:14:48.088 "num_base_bdevs": 2, 00:14:48.088 "num_base_bdevs_discovered": 2, 00:14:48.088 "num_base_bdevs_operational": 2, 00:14:48.088 "base_bdevs_list": [ 00:14:48.088 { 00:14:48.088 "name": "spare", 00:14:48.089 "uuid": "e893dbcd-c036-5fb9-bc16-86d201b54e38", 00:14:48.089 "is_configured": true, 00:14:48.089 "data_offset": 0, 00:14:48.089 
"data_size": 65536 00:14:48.089 }, 00:14:48.089 { 00:14:48.089 "name": "BaseBdev2", 00:14:48.089 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:48.089 "is_configured": true, 00:14:48.089 "data_offset": 0, 00:14:48.089 "data_size": 65536 00:14:48.089 } 00:14:48.089 ] 00:14:48.089 }' 00:14:48.089 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:48.347 "name": "raid_bdev1", 00:14:48.347 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:48.347 "strip_size_kb": 0, 00:14:48.347 "state": "online", 00:14:48.347 "raid_level": "raid1", 00:14:48.347 "superblock": false, 00:14:48.347 "num_base_bdevs": 2, 00:14:48.347 "num_base_bdevs_discovered": 2, 00:14:48.347 "num_base_bdevs_operational": 2, 00:14:48.347 "base_bdevs_list": [ 00:14:48.347 { 00:14:48.347 "name": "spare", 00:14:48.347 "uuid": "e893dbcd-c036-5fb9-bc16-86d201b54e38", 00:14:48.347 "is_configured": true, 00:14:48.347 "data_offset": 0, 00:14:48.347 "data_size": 65536 00:14:48.347 }, 00:14:48.347 { 00:14:48.347 "name": "BaseBdev2", 00:14:48.347 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:48.347 "is_configured": true, 00:14:48.347 "data_offset": 0, 00:14:48.347 "data_size": 65536 00:14:48.347 } 00:14:48.347 ] 00:14:48.347 }' 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.347 10:43:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.347 "name": "raid_bdev1", 00:14:48.347 "uuid": "9be0e6b4-fedf-478e-aba0-f678741362a7", 00:14:48.347 "strip_size_kb": 0, 00:14:48.347 "state": "online", 00:14:48.347 "raid_level": "raid1", 00:14:48.347 "superblock": false, 00:14:48.347 "num_base_bdevs": 2, 00:14:48.347 "num_base_bdevs_discovered": 2, 00:14:48.347 "num_base_bdevs_operational": 2, 00:14:48.347 "base_bdevs_list": [ 00:14:48.347 { 00:14:48.347 "name": "spare", 00:14:48.347 "uuid": "e893dbcd-c036-5fb9-bc16-86d201b54e38", 00:14:48.347 "is_configured": true, 00:14:48.347 "data_offset": 0, 00:14:48.347 "data_size": 65536 00:14:48.347 }, 00:14:48.347 { 00:14:48.347 "name": "BaseBdev2", 00:14:48.347 "uuid": "37659069-004f-5ecb-aa54-2d8de11f26bb", 00:14:48.347 "is_configured": true, 00:14:48.347 "data_offset": 0, 00:14:48.347 "data_size": 65536 00:14:48.347 } 00:14:48.347 ] 00:14:48.347 }' 00:14:48.347 10:43:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.347 10:43:18 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.914 [2024-11-15 10:43:19.332569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.914 [2024-11-15 10:43:19.332610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.914 [2024-11-15 10:43:19.332704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.914 [2024-11-15 10:43:19.332794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.914 [2024-11-15 10:43:19.332820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:48.914 10:43:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.914 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:49.172 /dev/nbd0 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:49.172 10:43:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:49.172 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.430 1+0 records in 00:14:49.430 1+0 records out 00:14:49.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361161 s, 11.3 MB/s 00:14:49.430 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.430 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:49.430 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.430 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:49.430 10:43:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:49.430 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.430 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:49.430 10:43:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:49.688 /dev/nbd1 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:49.688 10:43:20 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.688 1+0 records in 00:14:49.688 1+0 records out 00:14:49.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410228 s, 10.0 MB/s 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:49.688 10:43:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:49.947 10:43:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:49.947 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.947 10:43:20 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:49.947 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:49.947 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:49.947 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:49.947 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.206 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.479 10:43:20 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75631 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75631 ']' 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75631 00:14:50.479 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:14:50.480 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:50.480 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75631 00:14:50.480 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:50.480 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:50.480 killing process with pid 75631 00:14:50.480 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75631' 00:14:50.480 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75631 00:14:50.480 Received shutdown signal, test time was about 60.000000 seconds 00:14:50.480 00:14:50.480 Latency(us) 00:14:50.480 [2024-11-15T10:43:21.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.480 [2024-11-15T10:43:21.040Z] =================================================================================================================== 00:14:50.480 
[2024-11-15T10:43:21.040Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:50.480 [2024-11-15 10:43:20.915458] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.480 10:43:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75631 00:14:50.750 [2024-11-15 10:43:21.168917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:51.686 00:14:51.686 real 0m18.263s 00:14:51.686 user 0m20.922s 00:14:51.686 sys 0m3.201s 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.686 ************************************ 00:14:51.686 END TEST raid_rebuild_test 00:14:51.686 ************************************ 00:14:51.686 10:43:22 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:51.686 10:43:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:51.686 10:43:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:51.686 10:43:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.686 ************************************ 00:14:51.686 START TEST raid_rebuild_test_sb 00:14:51.686 ************************************ 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local 
background_io=false 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:51.686 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:51.687 10:43:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76084 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76084 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 76084 ']' 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:51.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:51.687 10:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.946 [2024-11-15 10:43:22.308442] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:14:51.946 [2024-11-15 10:43:22.308585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76084 ] 00:14:51.946 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:51.946 Zero copy mechanism will not be used. 
00:14:51.946 [2024-11-15 10:43:22.486247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.204 [2024-11-15 10:43:22.609499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.462 [2024-11-15 10:43:22.822991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.462 [2024-11-15 10:43:22.823066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 BaseBdev1_malloc 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 [2024-11-15 10:43:23.373798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:53.027 [2024-11-15 10:43:23.373866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.027 [2024-11-15 10:43:23.373896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:53.027 [2024-11-15 
10:43:23.373914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.027 [2024-11-15 10:43:23.376522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.027 [2024-11-15 10:43:23.376570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:53.027 BaseBdev1 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 BaseBdev2_malloc 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 [2024-11-15 10:43:23.417550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:53.027 [2024-11-15 10:43:23.417622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.027 [2024-11-15 10:43:23.417653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:53.027 [2024-11-15 10:43:23.417670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.027 [2024-11-15 10:43:23.420280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:53.027 [2024-11-15 10:43:23.420340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:53.027 BaseBdev2 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 spare_malloc 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 spare_delay 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 [2024-11-15 10:43:23.489994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:53.027 [2024-11-15 10:43:23.490084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.027 [2024-11-15 10:43:23.490118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:53.027 [2024-11-15 10:43:23.490138] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.027 [2024-11-15 10:43:23.493414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.027 [2024-11-15 10:43:23.493468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:53.027 spare 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.027 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 [2024-11-15 10:43:23.498280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.027 [2024-11-15 10:43:23.501177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.028 [2024-11-15 10:43:23.501507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:53.028 [2024-11-15 10:43:23.501565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:53.028 [2024-11-15 10:43:23.502073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:53.028 [2024-11-15 10:43:23.502451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:53.028 [2024-11-15 10:43:23.502500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:53.028 [2024-11-15 10:43:23.502905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.028 "name": "raid_bdev1", 00:14:53.028 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:14:53.028 "strip_size_kb": 0, 00:14:53.028 "state": "online", 00:14:53.028 "raid_level": "raid1", 00:14:53.028 "superblock": true, 00:14:53.028 "num_base_bdevs": 2, 00:14:53.028 
"num_base_bdevs_discovered": 2, 00:14:53.028 "num_base_bdevs_operational": 2, 00:14:53.028 "base_bdevs_list": [ 00:14:53.028 { 00:14:53.028 "name": "BaseBdev1", 00:14:53.028 "uuid": "cd9205df-b0e9-5e8e-8385-529bf839340c", 00:14:53.028 "is_configured": true, 00:14:53.028 "data_offset": 2048, 00:14:53.028 "data_size": 63488 00:14:53.028 }, 00:14:53.028 { 00:14:53.028 "name": "BaseBdev2", 00:14:53.028 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:14:53.028 "is_configured": true, 00:14:53.028 "data_offset": 2048, 00:14:53.028 "data_size": 63488 00:14:53.028 } 00:14:53.028 ] 00:14:53.028 }' 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.028 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.593 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:53.593 10:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.593 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.593 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.593 [2024-11-15 10:43:23.975221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.593 10:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.593 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:53.852 [2024-11-15 10:43:24.359022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:53.852 /dev/nbd0 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:53.852 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.852 1+0 records in 00:14:53.852 1+0 records out 00:14:53.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313863 s, 13.1 MB/s 00:14:54.110 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.110 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:54.110 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.110 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:54.110 10:43:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:54.110 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.110 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.110 10:43:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:54.110 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:54.110 10:43:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:00.711 63488+0 records in 00:15:00.711 63488+0 records out 00:15:00.711 32505856 bytes (33 MB, 31 MiB) copied, 6.29002 s, 5.2 MB/s 00:15:00.711 10:43:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:00.711 10:43:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.711 10:43:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:00.711 10:43:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.711 10:43:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:00.711 10:43:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.711 10:43:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:00.711 [2024-11-15 10:43:31.016255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.711 [2024-11-15 10:43:31.048717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.711 10:43:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.711 "name": "raid_bdev1", 00:15:00.711 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:00.711 "strip_size_kb": 0, 00:15:00.711 "state": "online", 00:15:00.711 "raid_level": "raid1", 00:15:00.711 "superblock": true, 00:15:00.711 "num_base_bdevs": 2, 00:15:00.711 "num_base_bdevs_discovered": 1, 00:15:00.711 "num_base_bdevs_operational": 1, 00:15:00.711 "base_bdevs_list": [ 00:15:00.711 { 00:15:00.711 "name": null, 00:15:00.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.711 "is_configured": false, 00:15:00.711 "data_offset": 0, 00:15:00.711 "data_size": 63488 00:15:00.711 }, 00:15:00.711 { 00:15:00.711 "name": "BaseBdev2", 00:15:00.711 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:00.711 "is_configured": true, 00:15:00.711 "data_offset": 2048, 00:15:00.711 "data_size": 63488 00:15:00.711 } 00:15:00.711 ] 00:15:00.711 }' 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.711 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.278 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.278 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.278 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.278 [2024-11-15 10:43:31.552855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:15:01.278 [2024-11-15 10:43:31.568089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:01.278 10:43:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.278 10:43:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:01.278 [2024-11-15 10:43:31.570432] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.215 "name": "raid_bdev1", 00:15:02.215 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:02.215 "strip_size_kb": 0, 00:15:02.215 "state": "online", 00:15:02.215 "raid_level": "raid1", 00:15:02.215 "superblock": true, 00:15:02.215 "num_base_bdevs": 2, 00:15:02.215 
"num_base_bdevs_discovered": 2, 00:15:02.215 "num_base_bdevs_operational": 2, 00:15:02.215 "process": { 00:15:02.215 "type": "rebuild", 00:15:02.215 "target": "spare", 00:15:02.215 "progress": { 00:15:02.215 "blocks": 20480, 00:15:02.215 "percent": 32 00:15:02.215 } 00:15:02.215 }, 00:15:02.215 "base_bdevs_list": [ 00:15:02.215 { 00:15:02.215 "name": "spare", 00:15:02.215 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7", 00:15:02.215 "is_configured": true, 00:15:02.215 "data_offset": 2048, 00:15:02.215 "data_size": 63488 00:15:02.215 }, 00:15:02.215 { 00:15:02.215 "name": "BaseBdev2", 00:15:02.215 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:02.215 "is_configured": true, 00:15:02.215 "data_offset": 2048, 00:15:02.215 "data_size": 63488 00:15:02.215 } 00:15:02.215 ] 00:15:02.215 }' 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.215 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.215 [2024-11-15 10:43:32.731856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.473 [2024-11-15 10:43:32.777289] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:02.473 [2024-11-15 10:43:32.777584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.473 [2024-11-15 10:43:32.777615] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.473 [2024-11-15 10:43:32.777632] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.473 10:43:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.473 "name": "raid_bdev1", 00:15:02.473 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:02.473 "strip_size_kb": 0, 00:15:02.473 "state": "online", 00:15:02.473 "raid_level": "raid1", 00:15:02.473 "superblock": true, 00:15:02.473 "num_base_bdevs": 2, 00:15:02.473 "num_base_bdevs_discovered": 1, 00:15:02.473 "num_base_bdevs_operational": 1, 00:15:02.473 "base_bdevs_list": [ 00:15:02.473 { 00:15:02.473 "name": null, 00:15:02.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.473 "is_configured": false, 00:15:02.473 "data_offset": 0, 00:15:02.473 "data_size": 63488 00:15:02.473 }, 00:15:02.473 { 00:15:02.473 "name": "BaseBdev2", 00:15:02.473 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:02.473 "is_configured": true, 00:15:02.473 "data_offset": 2048, 00:15:02.473 "data_size": 63488 00:15:02.473 } 00:15:02.473 ] 00:15:02.473 }' 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.473 10:43:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.041 "name": "raid_bdev1", 00:15:03.041 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:03.041 "strip_size_kb": 0, 00:15:03.041 "state": "online", 00:15:03.041 "raid_level": "raid1", 00:15:03.041 "superblock": true, 00:15:03.041 "num_base_bdevs": 2, 00:15:03.041 "num_base_bdevs_discovered": 1, 00:15:03.041 "num_base_bdevs_operational": 1, 00:15:03.041 "base_bdevs_list": [ 00:15:03.041 { 00:15:03.041 "name": null, 00:15:03.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.041 "is_configured": false, 00:15:03.041 "data_offset": 0, 00:15:03.041 "data_size": 63488 00:15:03.041 }, 00:15:03.041 { 00:15:03.041 "name": "BaseBdev2", 00:15:03.041 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:03.041 "is_configured": true, 00:15:03.041 "data_offset": 2048, 00:15:03.041 "data_size": 63488 00:15:03.041 } 00:15:03.041 ] 00:15:03.041 }' 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.041 [2024-11-15 10:43:33.488405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.041 [2024-11-15 10:43:33.502446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:03.041 [2024-11-15 10:43:33.504836] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.041 10:43:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.977 10:43:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.236 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.236 "name": "raid_bdev1", 00:15:04.236 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:04.236 "strip_size_kb": 0, 00:15:04.236 "state": "online", 
00:15:04.236 "raid_level": "raid1", 00:15:04.236 "superblock": true, 00:15:04.236 "num_base_bdevs": 2, 00:15:04.236 "num_base_bdevs_discovered": 2, 00:15:04.236 "num_base_bdevs_operational": 2, 00:15:04.236 "process": { 00:15:04.236 "type": "rebuild", 00:15:04.236 "target": "spare", 00:15:04.236 "progress": { 00:15:04.236 "blocks": 20480, 00:15:04.236 "percent": 32 00:15:04.236 } 00:15:04.236 }, 00:15:04.236 "base_bdevs_list": [ 00:15:04.236 { 00:15:04.236 "name": "spare", 00:15:04.236 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7", 00:15:04.236 "is_configured": true, 00:15:04.236 "data_offset": 2048, 00:15:04.236 "data_size": 63488 00:15:04.236 }, 00:15:04.236 { 00:15:04.236 "name": "BaseBdev2", 00:15:04.236 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:04.236 "is_configured": true, 00:15:04.236 "data_offset": 2048, 00:15:04.236 "data_size": 63488 00:15:04.236 } 00:15:04.236 ] 00:15:04.236 }' 00:15:04.236 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.236 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.236 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:04.237 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 
']' 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=408 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.237 "name": "raid_bdev1", 00:15:04.237 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:04.237 "strip_size_kb": 0, 00:15:04.237 "state": "online", 00:15:04.237 "raid_level": "raid1", 00:15:04.237 "superblock": true, 00:15:04.237 "num_base_bdevs": 2, 00:15:04.237 "num_base_bdevs_discovered": 2, 00:15:04.237 "num_base_bdevs_operational": 2, 00:15:04.237 "process": { 00:15:04.237 "type": "rebuild", 00:15:04.237 "target": "spare", 00:15:04.237 "progress": { 00:15:04.237 "blocks": 22528, 00:15:04.237 "percent": 35 00:15:04.237 } 00:15:04.237 }, 00:15:04.237 
"base_bdevs_list": [ 00:15:04.237 { 00:15:04.237 "name": "spare", 00:15:04.237 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7", 00:15:04.237 "is_configured": true, 00:15:04.237 "data_offset": 2048, 00:15:04.237 "data_size": 63488 00:15:04.237 }, 00:15:04.237 { 00:15:04.237 "name": "BaseBdev2", 00:15:04.237 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:04.237 "is_configured": true, 00:15:04.237 "data_offset": 2048, 00:15:04.237 "data_size": 63488 00:15:04.237 } 00:15:04.237 ] 00:15:04.237 }' 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.237 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.496 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.496 10:43:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.432 "name": "raid_bdev1", 00:15:05.432 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:05.432 "strip_size_kb": 0, 00:15:05.432 "state": "online", 00:15:05.432 "raid_level": "raid1", 00:15:05.432 "superblock": true, 00:15:05.432 "num_base_bdevs": 2, 00:15:05.432 "num_base_bdevs_discovered": 2, 00:15:05.432 "num_base_bdevs_operational": 2, 00:15:05.432 "process": { 00:15:05.432 "type": "rebuild", 00:15:05.432 "target": "spare", 00:15:05.432 "progress": { 00:15:05.432 "blocks": 47104, 00:15:05.432 "percent": 74 00:15:05.432 } 00:15:05.432 }, 00:15:05.432 "base_bdevs_list": [ 00:15:05.432 { 00:15:05.432 "name": "spare", 00:15:05.432 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7", 00:15:05.432 "is_configured": true, 00:15:05.432 "data_offset": 2048, 00:15:05.432 "data_size": 63488 00:15:05.432 }, 00:15:05.432 { 00:15:05.432 "name": "BaseBdev2", 00:15:05.432 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:05.432 "is_configured": true, 00:15:05.432 "data_offset": 2048, 00:15:05.432 "data_size": 63488 00:15:05.432 } 00:15:05.432 ] 00:15:05.432 }' 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.432 10:43:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.691 10:43:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.691 10:43:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:06.259 [2024-11-15 10:43:36.621692] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:06.259 [2024-11-15 10:43:36.621793] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:06.259 [2024-11-15 10:43:36.621942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.517 "name": "raid_bdev1", 00:15:06.517 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:06.517 "strip_size_kb": 0, 00:15:06.517 "state": "online", 00:15:06.517 "raid_level": "raid1", 00:15:06.517 "superblock": true, 00:15:06.517 "num_base_bdevs": 2, 00:15:06.517 
"num_base_bdevs_discovered": 2, 00:15:06.517 "num_base_bdevs_operational": 2, 00:15:06.517 "base_bdevs_list": [ 00:15:06.517 { 00:15:06.517 "name": "spare", 00:15:06.517 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7", 00:15:06.517 "is_configured": true, 00:15:06.517 "data_offset": 2048, 00:15:06.517 "data_size": 63488 00:15:06.517 }, 00:15:06.517 { 00:15:06.517 "name": "BaseBdev2", 00:15:06.517 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:06.517 "is_configured": true, 00:15:06.517 "data_offset": 2048, 00:15:06.517 "data_size": 63488 00:15:06.517 } 00:15:06.517 ] 00:15:06.517 }' 00:15:06.517 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.776 "name": "raid_bdev1", 00:15:06.776 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:06.776 "strip_size_kb": 0, 00:15:06.776 "state": "online", 00:15:06.776 "raid_level": "raid1", 00:15:06.776 "superblock": true, 00:15:06.776 "num_base_bdevs": 2, 00:15:06.776 "num_base_bdevs_discovered": 2, 00:15:06.776 "num_base_bdevs_operational": 2, 00:15:06.776 "base_bdevs_list": [ 00:15:06.776 { 00:15:06.776 "name": "spare", 00:15:06.776 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7", 00:15:06.776 "is_configured": true, 00:15:06.776 "data_offset": 2048, 00:15:06.776 "data_size": 63488 00:15:06.776 }, 00:15:06.776 { 00:15:06.776 "name": "BaseBdev2", 00:15:06.776 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:06.776 "is_configured": true, 00:15:06.776 "data_offset": 2048, 00:15:06.776 "data_size": 63488 00:15:06.776 } 00:15:06.776 ] 00:15:06.776 }' 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.776 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.035 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.035 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.035 "name": "raid_bdev1", 00:15:07.035 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:07.035 "strip_size_kb": 0, 00:15:07.035 "state": "online", 00:15:07.035 "raid_level": "raid1", 00:15:07.035 "superblock": true, 00:15:07.035 "num_base_bdevs": 2, 00:15:07.035 "num_base_bdevs_discovered": 2, 00:15:07.035 "num_base_bdevs_operational": 2, 00:15:07.035 "base_bdevs_list": [ 00:15:07.035 { 00:15:07.035 "name": "spare", 00:15:07.035 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7", 00:15:07.035 "is_configured": true, 00:15:07.035 "data_offset": 2048, 00:15:07.035 
"data_size": 63488 00:15:07.035 }, 00:15:07.035 { 00:15:07.035 "name": "BaseBdev2", 00:15:07.035 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:07.035 "is_configured": true, 00:15:07.035 "data_offset": 2048, 00:15:07.035 "data_size": 63488 00:15:07.035 } 00:15:07.035 ] 00:15:07.035 }' 00:15:07.035 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.035 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.294 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.294 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.294 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.294 [2024-11-15 10:43:37.844257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.294 [2024-11-15 10:43:37.844296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.294 [2024-11-15 10:43:37.844411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.294 [2024-11-15 10:43:37.844504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.294 [2024-11-15 10:43:37.844524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:07.294 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.294 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.294 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:15:07.552 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:07.553 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:15:07.553 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:07.553 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:15:07.553 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:07.553 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:07.553 10:43:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:15:07.811 /dev/nbd0
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:07.811 1+0 records in
00:15:07.811 1+0 records out
00:15:07.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251802 s, 16.3 MB/s
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:07.811 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:15:08.070 /dev/nbd1
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:08.070 1+0 records in
00:15:08.070 1+0 records out
00:15:08.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377061 s, 10.9 MB/s
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:15:08.070 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:08.071 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:15:08.071 10:43:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:15:08.071 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:08.071 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:08.071 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:15:08.329 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:15:08.330 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:08.330 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:15:08.330 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:08.330 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:15:08.330 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:08.330 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:08.588 10:43:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:08.588 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:08.588 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:08.588 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:08.588 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:08.588 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:08.588 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:15:08.588 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:15:08.588 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:08.588 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.846 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.105 [2024-11-15 10:43:39.410884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:09.105 [2024-11-15 10:43:39.410951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:09.105 [2024-11-15 10:43:39.410997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:15:09.105 [2024-11-15 10:43:39.411013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:09.105 [2024-11-15 10:43:39.413736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:09.105 [2024-11-15 10:43:39.413782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:09.105 [2024-11-15 10:43:39.413934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:15:09.105 [2024-11-15 10:43:39.413999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:09.105 spare
00:15:09.105 [2024-11-15 10:43:39.414152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.105 [2024-11-15 10:43:39.514302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:15:09.105 [2024-11-15 10:43:39.514561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:09.105 [2024-11-15 10:43:39.515107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0
00:15:09.105 [2024-11-15 10:43:39.515516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:15:09.105 [2024-11-15 10:43:39.515660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:15:09.105 [2024-11-15 10:43:39.516105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:09.105 "name": "raid_bdev1",
00:15:09.105 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524",
00:15:09.105 "strip_size_kb": 0,
00:15:09.105 "state": "online",
00:15:09.105 "raid_level": "raid1",
00:15:09.105 "superblock": true,
00:15:09.105 "num_base_bdevs": 2,
00:15:09.105 "num_base_bdevs_discovered": 2,
00:15:09.105 "num_base_bdevs_operational": 2,
00:15:09.105 "base_bdevs_list": [
00:15:09.105 {
00:15:09.105 "name": "spare",
00:15:09.105 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7",
00:15:09.105 "is_configured": true,
00:15:09.105 "data_offset": 2048,
00:15:09.105 "data_size": 63488
00:15:09.105 },
00:15:09.105 {
00:15:09.105 "name": "BaseBdev2",
00:15:09.105 "uuid": "11661829-d08c-505a-975f-ab659326dcf4",
00:15:09.105 "is_configured": true,
00:15:09.105 "data_offset": 2048,
00:15:09.105 "data_size": 63488
00:15:09.105 }
00:15:09.105 ]
00:15:09.105 }'
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:09.105 10:43:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:09.672 "name": "raid_bdev1",
00:15:09.672 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524",
00:15:09.672 "strip_size_kb": 0,
00:15:09.672 "state": "online",
00:15:09.672 "raid_level": "raid1",
00:15:09.672 "superblock": true,
00:15:09.672 "num_base_bdevs": 2,
00:15:09.672 "num_base_bdevs_discovered": 2,
00:15:09.672 "num_base_bdevs_operational": 2,
00:15:09.672 "base_bdevs_list": [
00:15:09.672 {
00:15:09.672 "name": "spare",
00:15:09.672 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7",
00:15:09.672 "is_configured": true,
00:15:09.672 "data_offset": 2048,
00:15:09.672 "data_size": 63488
00:15:09.672 },
00:15:09.672 {
00:15:09.672 "name": "BaseBdev2",
00:15:09.672 "uuid": "11661829-d08c-505a-975f-ab659326dcf4",
00:15:09.672 "is_configured": true,
00:15:09.672 "data_offset": 2048,
00:15:09.672 "data_size": 63488
00:15:09.672 }
00:15:09.672 ]
00:15:09.672 }'
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:15:09.672 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.931 [2024-11-15 10:43:40.244279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:09.931 "name": "raid_bdev1",
00:15:09.931 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524",
00:15:09.931 "strip_size_kb": 0,
00:15:09.931 "state": "online",
00:15:09.931 "raid_level": "raid1",
00:15:09.931 "superblock": true,
00:15:09.931 "num_base_bdevs": 2,
00:15:09.931 "num_base_bdevs_discovered": 1,
00:15:09.931 "num_base_bdevs_operational": 1,
00:15:09.931 "base_bdevs_list": [
00:15:09.931 {
00:15:09.931 "name": null,
00:15:09.931 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:09.931 "is_configured": false,
00:15:09.931 "data_offset": 0,
00:15:09.931 "data_size": 63488
00:15:09.931 },
00:15:09.931 {
00:15:09.931 "name": "BaseBdev2",
00:15:09.931 "uuid": "11661829-d08c-505a-975f-ab659326dcf4",
00:15:09.931 "is_configured": true,
00:15:09.931 "data_offset": 2048,
00:15:09.931 "data_size": 63488
00:15:09.931 }
00:15:09.931 ]
00:15:09.931 }'
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:09.931 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.498 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:10.498 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.498 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.498 [2024-11-15 10:43:40.784452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:10.498 [2024-11-15 10:43:40.784844] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:15:10.498 [2024-11-15 10:43:40.784879] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:10.498 [2024-11-15 10:43:40.784930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:10.498 [2024-11-15 10:43:40.798759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0
00:15:10.498 10:43:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.498 10:43:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:15:10.498 [2024-11-15 10:43:40.801100] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:11.434 "name": "raid_bdev1",
00:15:11.434 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524",
00:15:11.434 "strip_size_kb": 0,
00:15:11.434 "state": "online",
00:15:11.434 "raid_level": "raid1",
00:15:11.434 "superblock": true,
00:15:11.434 "num_base_bdevs": 2,
00:15:11.434 "num_base_bdevs_discovered": 2,
00:15:11.434 "num_base_bdevs_operational": 2,
00:15:11.434 "process": {
00:15:11.434 "type": "rebuild",
00:15:11.434 "target": "spare",
00:15:11.434 "progress": {
00:15:11.434 "blocks": 20480,
00:15:11.434 "percent": 32
00:15:11.434 }
00:15:11.434 },
00:15:11.434 "base_bdevs_list": [
00:15:11.434 {
00:15:11.434 "name": "spare",
00:15:11.434 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7",
00:15:11.434 "is_configured": true,
00:15:11.434 "data_offset": 2048,
00:15:11.434 "data_size": 63488
00:15:11.434 },
00:15:11.434 {
00:15:11.434 "name": "BaseBdev2",
00:15:11.434 "uuid": "11661829-d08c-505a-975f-ab659326dcf4",
00:15:11.434 "is_configured": true,
00:15:11.434 "data_offset": 2048,
00:15:11.434 "data_size": 63488
00:15:11.434 }
00:15:11.434 ]
00:15:11.434 }'
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.434 10:43:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.434 [2024-11-15 10:43:41.958892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:11.693 [2024-11-15 10:43:42.007801] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:11.693 [2024-11-15 10:43:42.007899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:11.693 [2024-11-15 10:43:42.007925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:11.693 [2024-11-15 10:43:42.007941] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:11.693 "name": "raid_bdev1",
00:15:11.693 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524",
00:15:11.693 "strip_size_kb": 0,
00:15:11.693 "state": "online",
00:15:11.693 "raid_level": "raid1",
00:15:11.693 "superblock": true,
00:15:11.693 "num_base_bdevs": 2,
00:15:11.693 "num_base_bdevs_discovered": 1,
00:15:11.693 "num_base_bdevs_operational": 1,
00:15:11.693 "base_bdevs_list": [
00:15:11.693 {
00:15:11.693 "name": null,
00:15:11.693 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:11.693 "is_configured": false,
00:15:11.693 "data_offset": 0,
00:15:11.693 "data_size": 63488
00:15:11.693 },
00:15:11.693 {
00:15:11.693 "name": "BaseBdev2",
00:15:11.693 "uuid": "11661829-d08c-505a-975f-ab659326dcf4",
00:15:11.693 "is_configured": true,
00:15:11.693 "data_offset": 2048,
00:15:11.693 "data_size": 63488
00:15:11.693 }
00:15:11.693 ]
00:15:11.693 }'
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:11.693 10:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.259 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:12.259 10:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.259 10:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.259 [2024-11-15 10:43:42.566077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:12.259 [2024-11-15 10:43:42.566169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:12.259 [2024-11-15 10:43:42.566201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:15:12.259 [2024-11-15 10:43:42.566219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:12.259 [2024-11-15 10:43:42.566788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:12.259 [2024-11-15 10:43:42.566828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:12.259 [2024-11-15 10:43:42.566945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:15:12.259 [2024-11-15 10:43:42.566998] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:15:12.259 [2024-11-15 10:43:42.567018] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:12.259 [2024-11-15 10:43:42.567053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:12.259 spare
00:15:12.259 [2024-11-15 10:43:42.580754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80
00:15:12.259 10:43:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.259 10:43:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:15:12.259 [2024-11-15 10:43:42.583053] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:13.204 "name": "raid_bdev1",
00:15:13.204 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524",
00:15:13.204 "strip_size_kb": 0,
00:15:13.204 "state": "online",
00:15:13.204 "raid_level": "raid1",
00:15:13.204 "superblock": true,
00:15:13.204 "num_base_bdevs": 2,
00:15:13.204 "num_base_bdevs_discovered": 2,
00:15:13.204 "num_base_bdevs_operational": 2,
00:15:13.204 "process": {
00:15:13.204 "type": "rebuild",
00:15:13.204 "target": "spare",
00:15:13.204 "progress": {
00:15:13.204 "blocks": 20480,
00:15:13.204 "percent": 32
00:15:13.204 }
00:15:13.204 },
00:15:13.204 "base_bdevs_list": [
00:15:13.204 {
00:15:13.204 "name": "spare",
00:15:13.204 "uuid": "a8056b22-ae01-5b86-89b4-0642a52fc0f7",
00:15:13.204 "is_configured": true,
00:15:13.204 "data_offset": 2048,
00:15:13.204 "data_size": 63488
00:15:13.204 },
00:15:13.204 {
00:15:13.204 "name": "BaseBdev2",
00:15:13.204 "uuid": "11661829-d08c-505a-975f-ab659326dcf4",
00:15:13.204 "is_configured": true,
00:15:13.204 "data_offset": 2048,
00:15:13.204 "data_size": 63488
00:15:13.204 }
00:15:13.204 ]
00:15:13.204 }'
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.204 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.204 [2024-11-15 10:43:43.736950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:13.463 [2024-11-15 10:43:43.789917] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:13.463 [2024-11-15 10:43:43.790001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:13.463 [2024-11-15 10:43:43.790030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:13.463 [2024-11-15 10:43:43.790043] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:13.463 "name": "raid_bdev1",
00:15:13.463 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524",
00:15:13.463 "strip_size_kb": 0,
00:15:13.463 "state": "online",
00:15:13.463 "raid_level": "raid1",
00:15:13.463 "superblock": true,
00:15:13.463 "num_base_bdevs": 2,
00:15:13.463 "num_base_bdevs_discovered": 1,
00:15:13.463 "num_base_bdevs_operational": 1,
00:15:13.463 "base_bdevs_list": [
00:15:13.463 {
00:15:13.463 "name": null,
00:15:13.463 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:13.463 "is_configured": false,
00:15:13.463 "data_offset": 0,
00:15:13.463 "data_size": 63488
00:15:13.463 },
00:15:13.463 {
00:15:13.463 "name": "BaseBdev2",
00:15:13.463 "uuid": "11661829-d08c-505a-975f-ab659326dcf4",
00:15:13.463 "is_configured": true,
00:15:13.463 "data_offset": 2048,
00:15:13.463 "data_size": 63488
00:15:13.463 }
00:15:13.463 ]
00:15:13.463 }'
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:13.463 10:43:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:14.029 "name": "raid_bdev1",
00:15:14.029 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524",
00:15:14.029 "strip_size_kb": 0,
00:15:14.029 "state": "online",
00:15:14.029 "raid_level": "raid1",
00:15:14.029 "superblock": true,
00:15:14.029 "num_base_bdevs": 2,
00:15:14.029 "num_base_bdevs_discovered": 1,
00:15:14.029 "num_base_bdevs_operational": 1,
00:15:14.029 "base_bdevs_list": [
00:15:14.029 {
00:15:14.029 "name": null,
00:15:14.029 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:14.029 "is_configured": false,
00:15:14.029 "data_offset": 0,
00:15:14.029 "data_size": 63488
00:15:14.029 },
00:15:14.029 {
00:15:14.029 "name": "BaseBdev2",
00:15:14.029 "uuid": "11661829-d08c-505a-975f-ab659326dcf4",
00:15:14.029 "is_configured": true,
00:15:14.029 "data_offset": 2048,
00:15:14.029 "data_size": 63488
00:15:14.029 }
00:15:14.029 ]
00:15:14.029 }'
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 --
# xtrace_disable 00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.029 [2024-11-15 10:43:44.480193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:14.029 [2024-11-15 10:43:44.480411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.029 [2024-11-15 10:43:44.480578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:14.029 [2024-11-15 10:43:44.480706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.029 [2024-11-15 10:43:44.481290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.029 [2024-11-15 10:43:44.481459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.029 [2024-11-15 10:43:44.481585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:14.029 [2024-11-15 10:43:44.481608] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:14.029 [2024-11-15 10:43:44.481622] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:14.029 [2024-11-15 10:43:44.481634] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:14.029 BaseBdev1 00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.029 10:43:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.962 10:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.219 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.219 "name": "raid_bdev1", 00:15:15.219 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:15.219 
"strip_size_kb": 0, 00:15:15.219 "state": "online", 00:15:15.219 "raid_level": "raid1", 00:15:15.219 "superblock": true, 00:15:15.219 "num_base_bdevs": 2, 00:15:15.219 "num_base_bdevs_discovered": 1, 00:15:15.219 "num_base_bdevs_operational": 1, 00:15:15.219 "base_bdevs_list": [ 00:15:15.219 { 00:15:15.219 "name": null, 00:15:15.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.219 "is_configured": false, 00:15:15.219 "data_offset": 0, 00:15:15.219 "data_size": 63488 00:15:15.219 }, 00:15:15.219 { 00:15:15.219 "name": "BaseBdev2", 00:15:15.219 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:15.219 "is_configured": true, 00:15:15.219 "data_offset": 2048, 00:15:15.219 "data_size": 63488 00:15:15.219 } 00:15:15.219 ] 00:15:15.219 }' 00:15:15.219 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.219 10:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.477 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.477 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.477 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.477 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.477 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.477 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.477 10:43:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.477 10:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.477 10:43:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.477 10:43:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.735 "name": "raid_bdev1", 00:15:15.735 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:15.735 "strip_size_kb": 0, 00:15:15.735 "state": "online", 00:15:15.735 "raid_level": "raid1", 00:15:15.735 "superblock": true, 00:15:15.735 "num_base_bdevs": 2, 00:15:15.735 "num_base_bdevs_discovered": 1, 00:15:15.735 "num_base_bdevs_operational": 1, 00:15:15.735 "base_bdevs_list": [ 00:15:15.735 { 00:15:15.735 "name": null, 00:15:15.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.735 "is_configured": false, 00:15:15.735 "data_offset": 0, 00:15:15.735 "data_size": 63488 00:15:15.735 }, 00:15:15.735 { 00:15:15.735 "name": "BaseBdev2", 00:15:15.735 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:15.735 "is_configured": true, 00:15:15.735 "data_offset": 2048, 00:15:15.735 "data_size": 63488 00:15:15.735 } 00:15:15.735 ] 00:15:15.735 }' 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.735 [2024-11-15 10:43:46.180782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.735 [2024-11-15 10:43:46.181124] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:15.735 [2024-11-15 10:43:46.181158] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:15.735 request: 00:15:15.735 { 00:15:15.735 "base_bdev": "BaseBdev1", 00:15:15.735 "raid_bdev": "raid_bdev1", 00:15:15.735 "method": "bdev_raid_add_base_bdev", 00:15:15.735 "req_id": 1 00:15:15.735 } 00:15:15.735 Got JSON-RPC error response 00:15:15.735 response: 00:15:15.735 { 00:15:15.735 "code": -22, 00:15:15.735 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:15.735 } 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:15.735 10:43:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:15.736 10:43:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:15.736 10:43:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.669 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.926 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.926 "name": "raid_bdev1", 00:15:16.926 "uuid": 
"daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:16.926 "strip_size_kb": 0, 00:15:16.926 "state": "online", 00:15:16.926 "raid_level": "raid1", 00:15:16.926 "superblock": true, 00:15:16.926 "num_base_bdevs": 2, 00:15:16.926 "num_base_bdevs_discovered": 1, 00:15:16.926 "num_base_bdevs_operational": 1, 00:15:16.926 "base_bdevs_list": [ 00:15:16.926 { 00:15:16.926 "name": null, 00:15:16.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.926 "is_configured": false, 00:15:16.926 "data_offset": 0, 00:15:16.926 "data_size": 63488 00:15:16.926 }, 00:15:16.926 { 00:15:16.926 "name": "BaseBdev2", 00:15:16.926 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:16.926 "is_configured": true, 00:15:16.926 "data_offset": 2048, 00:15:16.926 "data_size": 63488 00:15:16.926 } 00:15:16.926 ] 00:15:16.926 }' 00:15:16.926 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.926 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:17.185 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.444 "name": "raid_bdev1", 00:15:17.444 "uuid": "daec45ee-c36b-4873-b2c8-f793e16b7524", 00:15:17.444 "strip_size_kb": 0, 00:15:17.444 "state": "online", 00:15:17.444 "raid_level": "raid1", 00:15:17.444 "superblock": true, 00:15:17.444 "num_base_bdevs": 2, 00:15:17.444 "num_base_bdevs_discovered": 1, 00:15:17.444 "num_base_bdevs_operational": 1, 00:15:17.444 "base_bdevs_list": [ 00:15:17.444 { 00:15:17.444 "name": null, 00:15:17.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.444 "is_configured": false, 00:15:17.444 "data_offset": 0, 00:15:17.444 "data_size": 63488 00:15:17.444 }, 00:15:17.444 { 00:15:17.444 "name": "BaseBdev2", 00:15:17.444 "uuid": "11661829-d08c-505a-975f-ab659326dcf4", 00:15:17.444 "is_configured": true, 00:15:17.444 "data_offset": 2048, 00:15:17.444 "data_size": 63488 00:15:17.444 } 00:15:17.444 ] 00:15:17.444 }' 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76084 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 76084 ']' 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 76084 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76084 00:15:17.444 killing process with pid 76084 00:15:17.444 Received shutdown signal, test time was about 60.000000 seconds 00:15:17.444 00:15:17.444 Latency(us) 00:15:17.444 [2024-11-15T10:43:48.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.444 [2024-11-15T10:43:48.004Z] =================================================================================================================== 00:15:17.444 [2024-11-15T10:43:48.004Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76084' 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 76084 00:15:17.444 [2024-11-15 10:43:47.874265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.444 10:43:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 76084 00:15:17.444 [2024-11-15 10:43:47.874445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.444 [2024-11-15 10:43:47.874515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.444 [2024-11-15 10:43:47.874536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:17.703 [2024-11-15 10:43:48.128672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:15:18.638 00:15:18.638 real 0m26.898s 00:15:18.638 user 0m33.296s 00:15:18.638 sys 0m3.756s 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:18.638 ************************************ 00:15:18.638 END TEST raid_rebuild_test_sb 00:15:18.638 ************************************ 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.638 10:43:49 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:15:18.638 10:43:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:18.638 10:43:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:18.638 10:43:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.638 ************************************ 00:15:18.638 START TEST raid_rebuild_test_io 00:15:18.638 ************************************ 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:18.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76851 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76851 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76851 ']' 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:18.638 10:43:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.896 [2024-11-15 10:43:49.279703] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:15:18.896 [2024-11-15 10:43:49.280116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76851 ] 00:15:18.896 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:18.896 Zero copy mechanism will not be used. 
00:15:19.155 [2024-11-15 10:43:49.458078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.155 [2024-11-15 10:43:49.560335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.415 [2024-11-15 10:43:49.739317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.415 [2024-11-15 10:43:49.739383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 BaseBdev1_malloc 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 [2024-11-15 10:43:50.281385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:20.058 [2024-11-15 10:43:50.281614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.058 [2024-11-15 10:43:50.281770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:20.058 [2024-11-15 
10:43:50.281804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.058 [2024-11-15 10:43:50.284458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.058 [2024-11-15 10:43:50.284510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:20.058 BaseBdev1 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 BaseBdev2_malloc 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 [2024-11-15 10:43:50.328947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:20.058 [2024-11-15 10:43:50.329177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.058 [2024-11-15 10:43:50.329221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:20.058 [2024-11-15 10:43:50.329240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.058 [2024-11-15 10:43:50.331826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:20.058 [2024-11-15 10:43:50.331891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:20.058 BaseBdev2 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 spare_malloc 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 spare_delay 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 [2024-11-15 10:43:50.412463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.058 [2024-11-15 10:43:50.412760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.058 [2024-11-15 10:43:50.412817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:20.058 [2024-11-15 10:43:50.412850] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.058 [2024-11-15 10:43:50.416586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.058 [2024-11-15 10:43:50.416659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.058 spare 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 [2024-11-15 10:43:50.420959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.058 [2024-11-15 10:43:50.424248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.058 [2024-11-15 10:43:50.424619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:20.058 [2024-11-15 10:43:50.424814] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:20.058 [2024-11-15 10:43:50.425283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:20.058 [2024-11-15 10:43:50.425658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:20.058 [2024-11-15 10:43:50.425690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:20.058 [2024-11-15 10:43:50.426039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.058 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.059 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.059 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.059 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.059 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.059 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.059 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.059 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.059 "name": "raid_bdev1", 00:15:20.059 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:20.059 "strip_size_kb": 0, 00:15:20.059 "state": "online", 00:15:20.059 "raid_level": "raid1", 00:15:20.059 "superblock": false, 00:15:20.059 "num_base_bdevs": 2, 00:15:20.059 
"num_base_bdevs_discovered": 2, 00:15:20.059 "num_base_bdevs_operational": 2, 00:15:20.059 "base_bdevs_list": [ 00:15:20.059 { 00:15:20.059 "name": "BaseBdev1", 00:15:20.059 "uuid": "55677d8d-d890-5887-ab85-e6380058d5d3", 00:15:20.059 "is_configured": true, 00:15:20.059 "data_offset": 0, 00:15:20.059 "data_size": 65536 00:15:20.059 }, 00:15:20.059 { 00:15:20.059 "name": "BaseBdev2", 00:15:20.059 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:20.059 "is_configured": true, 00:15:20.059 "data_offset": 0, 00:15:20.059 "data_size": 65536 00:15:20.059 } 00:15:20.059 ] 00:15:20.059 }' 00:15:20.059 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.059 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.349 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.349 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:20.349 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.349 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.349 [2024-11-15 10:43:50.902399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.607 [2024-11-15 10:43:50.994053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.607 10:43:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.607 10:43:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.607 10:43:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.607 10:43:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.607 10:43:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.607 "name": "raid_bdev1", 00:15:20.607 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:20.607 "strip_size_kb": 0, 00:15:20.607 "state": "online", 00:15:20.607 "raid_level": "raid1", 00:15:20.607 "superblock": false, 00:15:20.607 "num_base_bdevs": 2, 00:15:20.607 "num_base_bdevs_discovered": 1, 00:15:20.607 "num_base_bdevs_operational": 1, 00:15:20.607 "base_bdevs_list": [ 00:15:20.607 { 00:15:20.607 "name": null, 00:15:20.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.607 "is_configured": false, 00:15:20.607 "data_offset": 0, 00:15:20.607 "data_size": 65536 00:15:20.607 }, 00:15:20.607 { 00:15:20.607 "name": "BaseBdev2", 00:15:20.607 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:20.607 "is_configured": true, 00:15:20.607 "data_offset": 0, 00:15:20.607 "data_size": 65536 00:15:20.607 } 00:15:20.607 ] 00:15:20.607 }' 00:15:20.607 10:43:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.607 10:43:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.607 [2024-11-15 10:43:51.093231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:20.607 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:15:20.607 Zero copy mechanism will not be used. 00:15:20.607 Running I/O for 60 seconds... 00:15:21.174 10:43:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.174 10:43:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.174 10:43:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.174 [2024-11-15 10:43:51.461527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.174 10:43:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.174 10:43:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:21.174 [2024-11-15 10:43:51.560814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:21.174 [2024-11-15 10:43:51.563404] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.174 [2024-11-15 10:43:51.673790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:21.174 [2024-11-15 10:43:51.674456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:21.431 [2024-11-15 10:43:51.893963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:21.431 [2024-11-15 10:43:51.894528] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:21.689 173.00 IOPS, 519.00 MiB/s [2024-11-15T10:43:52.249Z] [2024-11-15 10:43:52.242663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:21.689 [2024-11-15 10:43:52.243143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:21.947 [2024-11-15 10:43:52.453172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:21.947 [2024-11-15 10:43:52.453485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:22.204 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.204 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.205 "name": "raid_bdev1", 00:15:22.205 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:22.205 "strip_size_kb": 0, 00:15:22.205 "state": "online", 00:15:22.205 "raid_level": "raid1", 00:15:22.205 "superblock": false, 00:15:22.205 "num_base_bdevs": 2, 00:15:22.205 "num_base_bdevs_discovered": 2, 00:15:22.205 "num_base_bdevs_operational": 2, 00:15:22.205 "process": { 00:15:22.205 
"type": "rebuild", 00:15:22.205 "target": "spare", 00:15:22.205 "progress": { 00:15:22.205 "blocks": 10240, 00:15:22.205 "percent": 15 00:15:22.205 } 00:15:22.205 }, 00:15:22.205 "base_bdevs_list": [ 00:15:22.205 { 00:15:22.205 "name": "spare", 00:15:22.205 "uuid": "a1f9bd9d-90fe-5469-b3a6-e0c3e0c29aa9", 00:15:22.205 "is_configured": true, 00:15:22.205 "data_offset": 0, 00:15:22.205 "data_size": 65536 00:15:22.205 }, 00:15:22.205 { 00:15:22.205 "name": "BaseBdev2", 00:15:22.205 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:22.205 "is_configured": true, 00:15:22.205 "data_offset": 0, 00:15:22.205 "data_size": 65536 00:15:22.205 } 00:15:22.205 ] 00:15:22.205 }' 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.205 10:43:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.205 [2024-11-15 10:43:52.695809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.463 [2024-11-15 10:43:52.875819] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:22.463 [2024-11-15 10:43:52.893726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.463 [2024-11-15 10:43:52.893782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.463 [2024-11-15 10:43:52.893805] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.463 [2024-11-15 10:43:52.943663] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.463 10:43:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:22.463 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.463 "name": "raid_bdev1", 00:15:22.463 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:22.463 "strip_size_kb": 0, 00:15:22.463 "state": "online", 00:15:22.463 "raid_level": "raid1", 00:15:22.463 "superblock": false, 00:15:22.463 "num_base_bdevs": 2, 00:15:22.463 "num_base_bdevs_discovered": 1, 00:15:22.463 "num_base_bdevs_operational": 1, 00:15:22.463 "base_bdevs_list": [ 00:15:22.463 { 00:15:22.463 "name": null, 00:15:22.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.463 "is_configured": false, 00:15:22.463 "data_offset": 0, 00:15:22.463 "data_size": 65536 00:15:22.463 }, 00:15:22.463 { 00:15:22.463 "name": "BaseBdev2", 00:15:22.463 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:22.463 "is_configured": true, 00:15:22.463 "data_offset": 0, 00:15:22.463 "data_size": 65536 00:15:22.463 } 00:15:22.463 ] 00:15:22.463 }' 00:15:22.463 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.463 10:43:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.977 127.00 IOPS, 381.00 MiB/s [2024-11-15T10:43:53.537Z] 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.977 "name": "raid_bdev1", 00:15:22.977 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:22.977 "strip_size_kb": 0, 00:15:22.977 "state": "online", 00:15:22.977 "raid_level": "raid1", 00:15:22.977 "superblock": false, 00:15:22.977 "num_base_bdevs": 2, 00:15:22.977 "num_base_bdevs_discovered": 1, 00:15:22.977 "num_base_bdevs_operational": 1, 00:15:22.977 "base_bdevs_list": [ 00:15:22.977 { 00:15:22.977 "name": null, 00:15:22.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.977 "is_configured": false, 00:15:22.977 "data_offset": 0, 00:15:22.977 "data_size": 65536 00:15:22.977 }, 00:15:22.977 { 00:15:22.977 "name": "BaseBdev2", 00:15:22.977 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:22.977 "is_configured": true, 00:15:22.977 "data_offset": 0, 00:15:22.977 "data_size": 65536 00:15:22.977 } 00:15:22.977 ] 00:15:22.977 }' 00:15:22.977 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.235 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.235 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.235 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.235 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:23.235 10:43:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.235 10:43:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.235 [2024-11-15 10:43:53.645687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.235 10:43:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.235 10:43:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:23.235 [2024-11-15 10:43:53.696563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:23.235 [2024-11-15 10:43:53.698823] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.493 [2024-11-15 10:43:53.815784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:23.493 [2024-11-15 10:43:53.816520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:23.493 [2024-11-15 10:43:53.936561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:23.493 [2024-11-15 10:43:53.936896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:23.750 158.67 IOPS, 476.00 MiB/s [2024-11-15T10:43:54.310Z] [2024-11-15 10:43:54.185913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:24.009 [2024-11-15 10:43:54.413168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:24.009 [2024-11-15 10:43:54.413504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:24.269 [2024-11-15 10:43:54.672976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 
00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.269 "name": "raid_bdev1", 00:15:24.269 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:24.269 "strip_size_kb": 0, 00:15:24.269 "state": "online", 00:15:24.269 "raid_level": "raid1", 00:15:24.269 "superblock": false, 00:15:24.269 "num_base_bdevs": 2, 00:15:24.269 "num_base_bdevs_discovered": 2, 00:15:24.269 "num_base_bdevs_operational": 2, 00:15:24.269 "process": { 00:15:24.269 "type": "rebuild", 00:15:24.269 "target": "spare", 00:15:24.269 "progress": { 00:15:24.269 "blocks": 14336, 00:15:24.269 "percent": 21 00:15:24.269 } 00:15:24.269 }, 00:15:24.269 "base_bdevs_list": [ 00:15:24.269 { 00:15:24.269 "name": "spare", 00:15:24.269 "uuid": "a1f9bd9d-90fe-5469-b3a6-e0c3e0c29aa9", 00:15:24.269 "is_configured": true, 00:15:24.269 "data_offset": 0, 00:15:24.269 
"data_size": 65536 00:15:24.269 }, 00:15:24.269 { 00:15:24.269 "name": "BaseBdev2", 00:15:24.269 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:24.269 "is_configured": true, 00:15:24.269 "data_offset": 0, 00:15:24.269 "data_size": 65536 00:15:24.269 } 00:15:24.269 ] 00:15:24.269 }' 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.269 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=428 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.528 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.528 "name": "raid_bdev1", 00:15:24.528 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:24.528 "strip_size_kb": 0, 00:15:24.528 "state": "online", 00:15:24.528 "raid_level": "raid1", 00:15:24.528 "superblock": false, 00:15:24.528 "num_base_bdevs": 2, 00:15:24.528 "num_base_bdevs_discovered": 2, 00:15:24.528 "num_base_bdevs_operational": 2, 00:15:24.528 "process": { 00:15:24.528 "type": "rebuild", 00:15:24.528 "target": "spare", 00:15:24.528 "progress": { 00:15:24.528 "blocks": 14336, 00:15:24.529 "percent": 21 00:15:24.529 } 00:15:24.529 }, 00:15:24.529 "base_bdevs_list": [ 00:15:24.529 { 00:15:24.529 "name": "spare", 00:15:24.529 "uuid": "a1f9bd9d-90fe-5469-b3a6-e0c3e0c29aa9", 00:15:24.529 "is_configured": true, 00:15:24.529 "data_offset": 0, 00:15:24.529 "data_size": 65536 00:15:24.529 }, 00:15:24.529 { 00:15:24.529 "name": "BaseBdev2", 00:15:24.529 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:24.529 "is_configured": true, 00:15:24.529 "data_offset": 0, 00:15:24.529 "data_size": 65536 00:15:24.529 } 00:15:24.529 ] 00:15:24.529 }' 00:15:24.529 [2024-11-15 10:43:54.901036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:24.529 [2024-11-15 10:43:54.901514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:15:24.529 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.529 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.529 10:43:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.529 10:43:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.529 10:43:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.047 139.00 IOPS, 417.00 MiB/s [2024-11-15T10:43:55.607Z] [2024-11-15 10:43:55.475327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:25.305 [2024-11-15 10:43:55.781863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:25.305 [2024-11-15 10:43:55.782369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:25.564 [2024-11-15 10:43:56.010129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.564 "name": "raid_bdev1", 00:15:25.564 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:25.564 "strip_size_kb": 0, 00:15:25.564 "state": "online", 00:15:25.564 "raid_level": "raid1", 00:15:25.564 "superblock": false, 00:15:25.564 "num_base_bdevs": 2, 00:15:25.564 "num_base_bdevs_discovered": 2, 00:15:25.564 "num_base_bdevs_operational": 2, 00:15:25.564 "process": { 00:15:25.564 "type": "rebuild", 00:15:25.564 "target": "spare", 00:15:25.564 "progress": { 00:15:25.564 "blocks": 34816, 00:15:25.564 "percent": 53 00:15:25.564 } 00:15:25.564 }, 00:15:25.564 "base_bdevs_list": [ 00:15:25.564 { 00:15:25.564 "name": "spare", 00:15:25.564 "uuid": "a1f9bd9d-90fe-5469-b3a6-e0c3e0c29aa9", 00:15:25.564 "is_configured": true, 00:15:25.564 "data_offset": 0, 00:15:25.564 "data_size": 65536 00:15:25.564 }, 00:15:25.564 { 00:15:25.564 "name": "BaseBdev2", 00:15:25.564 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:25.564 "is_configured": true, 00:15:25.564 "data_offset": 0, 00:15:25.564 "data_size": 65536 00:15:25.564 } 00:15:25.564 ] 00:15:25.564 }' 00:15:25.564 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.565 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.565 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:25.824 124.20 IOPS, 372.60 MiB/s [2024-11-15T10:43:56.384Z] 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.824 10:43:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.824 [2024-11-15 10:43:56.365867] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:26.391 [2024-11-15 10:43:56.701232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:26.651 110.67 IOPS, 332.00 MiB/s [2024-11-15T10:43:57.211Z] 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.651 10:43:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.960 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.960 
"name": "raid_bdev1", 00:15:26.960 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:26.960 "strip_size_kb": 0, 00:15:26.960 "state": "online", 00:15:26.960 "raid_level": "raid1", 00:15:26.960 "superblock": false, 00:15:26.960 "num_base_bdevs": 2, 00:15:26.960 "num_base_bdevs_discovered": 2, 00:15:26.960 "num_base_bdevs_operational": 2, 00:15:26.960 "process": { 00:15:26.960 "type": "rebuild", 00:15:26.960 "target": "spare", 00:15:26.960 "progress": { 00:15:26.960 "blocks": 53248, 00:15:26.960 "percent": 81 00:15:26.960 } 00:15:26.960 }, 00:15:26.960 "base_bdevs_list": [ 00:15:26.960 { 00:15:26.960 "name": "spare", 00:15:26.960 "uuid": "a1f9bd9d-90fe-5469-b3a6-e0c3e0c29aa9", 00:15:26.960 "is_configured": true, 00:15:26.960 "data_offset": 0, 00:15:26.960 "data_size": 65536 00:15:26.960 }, 00:15:26.960 { 00:15:26.960 "name": "BaseBdev2", 00:15:26.960 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:26.960 "is_configured": true, 00:15:26.960 "data_offset": 0, 00:15:26.960 "data_size": 65536 00:15:26.960 } 00:15:26.960 ] 00:15:26.960 }' 00:15:26.960 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.960 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.960 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.960 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.960 10:43:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.548 [2024-11-15 10:43:57.822999] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:27.548 [2024-11-15 10:43:57.930845] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:27.548 [2024-11-15 10:43:57.933038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.807 
98.86 IOPS, 296.57 MiB/s [2024-11-15T10:43:58.367Z] 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.807 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.065 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.065 "name": "raid_bdev1", 00:15:28.066 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:28.066 "strip_size_kb": 0, 00:15:28.066 "state": "online", 00:15:28.066 "raid_level": "raid1", 00:15:28.066 "superblock": false, 00:15:28.066 "num_base_bdevs": 2, 00:15:28.066 "num_base_bdevs_discovered": 2, 00:15:28.066 "num_base_bdevs_operational": 2, 00:15:28.066 "base_bdevs_list": [ 00:15:28.066 { 00:15:28.066 "name": "spare", 00:15:28.066 "uuid": "a1f9bd9d-90fe-5469-b3a6-e0c3e0c29aa9", 00:15:28.066 "is_configured": true, 00:15:28.066 "data_offset": 0, 00:15:28.066 "data_size": 65536 00:15:28.066 }, 00:15:28.066 { 00:15:28.066 
"name": "BaseBdev2", 00:15:28.066 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:28.066 "is_configured": true, 00:15:28.066 "data_offset": 0, 00:15:28.066 "data_size": 65536 00:15:28.066 } 00:15:28.066 ] 00:15:28.066 }' 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.066 "name": 
"raid_bdev1", 00:15:28.066 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:28.066 "strip_size_kb": 0, 00:15:28.066 "state": "online", 00:15:28.066 "raid_level": "raid1", 00:15:28.066 "superblock": false, 00:15:28.066 "num_base_bdevs": 2, 00:15:28.066 "num_base_bdevs_discovered": 2, 00:15:28.066 "num_base_bdevs_operational": 2, 00:15:28.066 "base_bdevs_list": [ 00:15:28.066 { 00:15:28.066 "name": "spare", 00:15:28.066 "uuid": "a1f9bd9d-90fe-5469-b3a6-e0c3e0c29aa9", 00:15:28.066 "is_configured": true, 00:15:28.066 "data_offset": 0, 00:15:28.066 "data_size": 65536 00:15:28.066 }, 00:15:28.066 { 00:15:28.066 "name": "BaseBdev2", 00:15:28.066 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:28.066 "is_configured": true, 00:15:28.066 "data_offset": 0, 00:15:28.066 "data_size": 65536 00:15:28.066 } 00:15:28.066 ] 00:15:28.066 }' 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.066 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.325 10:43:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.325 "name": "raid_bdev1", 00:15:28.325 "uuid": "5d793457-9354-478a-bc4f-cbe605153fcc", 00:15:28.325 "strip_size_kb": 0, 00:15:28.325 "state": "online", 00:15:28.325 "raid_level": "raid1", 00:15:28.325 "superblock": false, 00:15:28.325 "num_base_bdevs": 2, 00:15:28.325 "num_base_bdevs_discovered": 2, 00:15:28.325 "num_base_bdevs_operational": 2, 00:15:28.325 "base_bdevs_list": [ 00:15:28.325 { 00:15:28.325 "name": "spare", 00:15:28.325 "uuid": "a1f9bd9d-90fe-5469-b3a6-e0c3e0c29aa9", 00:15:28.325 "is_configured": true, 00:15:28.325 "data_offset": 0, 00:15:28.325 "data_size": 65536 00:15:28.325 }, 00:15:28.325 { 00:15:28.325 "name": "BaseBdev2", 00:15:28.325 "uuid": "881e2720-9a62-53b7-9363-7adcfcb932b8", 00:15:28.325 "is_configured": true, 00:15:28.325 "data_offset": 0, 00:15:28.325 "data_size": 65536 00:15:28.325 } 00:15:28.325 ] 00:15:28.325 }' 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:28.325 10:43:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.584 90.62 IOPS, 271.88 MiB/s [2024-11-15T10:43:59.144Z] 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:28.584 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.584 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.584 [2024-11-15 10:43:59.114789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.584 [2024-11-15 10:43:59.114974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.584 00:15:28.584 Latency(us) 00:15:28.584 [2024-11-15T10:43:59.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.584 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:28.584 raid_bdev1 : 8.04 90.40 271.20 0.00 0.00 14211.11 286.72 110100.48 00:15:28.584 [2024-11-15T10:43:59.144Z] =================================================================================================================== 00:15:28.584 [2024-11-15T10:43:59.144Z] Total : 90.40 271.20 0.00 0.00 14211.11 286.72 110100.48 00:15:28.843 [2024-11-15 10:43:59.157777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.843 [2024-11-15 10:43:59.158060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.843 [2024-11-15 10:43:59.158184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.843 [2024-11-15 10:43:59.158202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:28.843 { 00:15:28.843 "results": [ 00:15:28.843 { 00:15:28.843 "job": "raid_bdev1", 00:15:28.843 "core_mask": "0x1", 00:15:28.843 
"workload": "randrw", 00:15:28.843 "percentage": 50, 00:15:28.843 "status": "finished", 00:15:28.843 "queue_depth": 2, 00:15:28.843 "io_size": 3145728, 00:15:28.843 "runtime": 8.041937, 00:15:28.843 "iops": 90.40110610167675, 00:15:28.843 "mibps": 271.2033183050302, 00:15:28.843 "io_failed": 0, 00:15:28.843 "io_timeout": 0, 00:15:28.843 "avg_latency_us": 14211.107085156933, 00:15:28.843 "min_latency_us": 286.72, 00:15:28.843 "max_latency_us": 110100.48 00:15:28.843 } 00:15:28.843 ], 00:15:28.843 "core_count": 1 00:15:28.843 } 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:28.843 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:29.102 /dev/nbd0 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.102 1+0 records in 00:15:29.102 1+0 records out 00:15:29.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362928 s, 11.3 MB/s 00:15:29.102 10:43:59 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:29.102 10:43:59 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:29.361 /dev/nbd1 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.361 1+0 records in 00:15:29.361 1+0 records out 00:15:29.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285632 s, 14.3 MB/s 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- 
# '[' 4096 '!=' 0 ']' 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:29.361 10:43:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:29.619 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:29.619 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.619 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:29.619 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.619 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:29.619 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.619 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:29.878 10:44:00 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.878 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76851 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76851 
']' 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76851 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:30.137 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76851 00:15:30.396 killing process with pid 76851 00:15:30.396 Received shutdown signal, test time was about 9.614289 seconds 00:15:30.396 00:15:30.396 Latency(us) 00:15:30.396 [2024-11-15T10:44:00.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.396 [2024-11-15T10:44:00.956Z] =================================================================================================================== 00:15:30.396 [2024-11-15T10:44:00.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.396 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:30.396 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:30.396 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76851' 00:15:30.396 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76851 00:15:30.396 [2024-11-15 10:44:00.710023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.396 10:44:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76851 00:15:30.396 [2024-11-15 10:44:00.901498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:31.773 00:15:31.773 real 0m12.763s 00:15:31.773 user 0m16.756s 00:15:31.773 sys 0m1.263s 00:15:31.773 ************************************ 00:15:31.773 END TEST 
raid_rebuild_test_io 00:15:31.773 ************************************ 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.773 10:44:01 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:15:31.773 10:44:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:31.773 10:44:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:31.773 10:44:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.773 ************************************ 00:15:31.773 START TEST raid_rebuild_test_sb_io 00:15:31.773 ************************************ 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77231 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77231 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@833 -- # '[' -z 77231 ']' 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:31.773 10:44:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.773 [2024-11-15 10:44:02.095491] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:15:31.773 [2024-11-15 10:44:02.095872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77231 ] 00:15:31.773 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:31.773 Zero copy mechanism will not be used. 
00:15:31.773 [2024-11-15 10:44:02.270539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.033 [2024-11-15 10:44:02.372387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.033 [2024-11-15 10:44:02.551836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.033 [2024-11-15 10:44:02.552083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.601 BaseBdev1_malloc 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.601 [2024-11-15 10:44:03.125205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:32.601 [2024-11-15 10:44:03.125447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.601 [2024-11-15 10:44:03.125638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:15:32.601 [2024-11-15 10:44:03.125773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.601 [2024-11-15 10:44:03.128446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.601 [2024-11-15 10:44:03.128617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:32.601 BaseBdev1 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.601 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.860 BaseBdev2_malloc 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.860 [2024-11-15 10:44:03.172527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:32.860 [2024-11-15 10:44:03.172735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.860 [2024-11-15 10:44:03.172814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:32.860 [2024-11-15 10:44:03.172928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.860 [2024-11-15 10:44:03.175639] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.860 [2024-11-15 10:44:03.175814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:32.860 BaseBdev2 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.860 spare_malloc 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.860 spare_delay 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.860 [2024-11-15 10:44:03.237771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:32.860 [2024-11-15 10:44:03.237849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.860 [2024-11-15 10:44:03.237880] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:32.860 [2024-11-15 10:44:03.237898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.860 [2024-11-15 10:44:03.240518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.860 [2024-11-15 10:44:03.240569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:32.860 spare 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.860 [2024-11-15 10:44:03.245837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.860 [2024-11-15 10:44:03.248095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.860 [2024-11-15 10:44:03.248335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:32.860 [2024-11-15 10:44:03.248376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:32.860 [2024-11-15 10:44:03.248700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:32.860 [2024-11-15 10:44:03.248921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:32.860 [2024-11-15 10:44:03.248946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:32.860 [2024-11-15 10:44:03.249133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.860 10:44:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.860 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.861 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.861 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.861 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.861 "name": "raid_bdev1", 00:15:32.861 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:32.861 
"strip_size_kb": 0, 00:15:32.861 "state": "online", 00:15:32.861 "raid_level": "raid1", 00:15:32.861 "superblock": true, 00:15:32.861 "num_base_bdevs": 2, 00:15:32.861 "num_base_bdevs_discovered": 2, 00:15:32.861 "num_base_bdevs_operational": 2, 00:15:32.861 "base_bdevs_list": [ 00:15:32.861 { 00:15:32.861 "name": "BaseBdev1", 00:15:32.861 "uuid": "91f950e9-80a5-59f3-8250-a166983ed4fe", 00:15:32.861 "is_configured": true, 00:15:32.861 "data_offset": 2048, 00:15:32.861 "data_size": 63488 00:15:32.861 }, 00:15:32.861 { 00:15:32.861 "name": "BaseBdev2", 00:15:32.861 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:32.861 "is_configured": true, 00:15:32.861 "data_offset": 2048, 00:15:32.861 "data_size": 63488 00:15:32.861 } 00:15:32.861 ] 00:15:32.861 }' 00:15:32.861 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.861 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.428 [2024-11-15 10:44:03.782295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.428 10:44:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.428 [2024-11-15 10:44:03.889965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.428 10:44:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.428 "name": "raid_bdev1", 00:15:33.428 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:33.428 "strip_size_kb": 0, 00:15:33.428 "state": "online", 00:15:33.428 "raid_level": "raid1", 00:15:33.428 "superblock": true, 00:15:33.428 "num_base_bdevs": 2, 00:15:33.428 "num_base_bdevs_discovered": 1, 00:15:33.428 "num_base_bdevs_operational": 1, 00:15:33.428 "base_bdevs_list": [ 00:15:33.428 { 00:15:33.428 "name": null, 00:15:33.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.428 "is_configured": false, 00:15:33.428 "data_offset": 0, 00:15:33.428 "data_size": 63488 00:15:33.428 }, 00:15:33.428 { 00:15:33.428 "name": "BaseBdev2", 00:15:33.428 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:33.428 "is_configured": true, 00:15:33.428 "data_offset": 2048, 00:15:33.428 "data_size": 63488 00:15:33.428 } 00:15:33.428 ] 00:15:33.428 }' 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.428 10:44:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.687 [2024-11-15 10:44:04.016987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:33.687 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:33.687 Zero copy mechanism will not be used. 00:15:33.687 Running I/O for 60 seconds... 00:15:33.945 10:44:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.945 10:44:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.945 10:44:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.945 [2024-11-15 10:44:04.419592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.945 10:44:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.945 10:44:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:33.945 [2024-11-15 10:44:04.487180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:33.945 [2024-11-15 10:44:04.489467] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:34.204 [2024-11-15 10:44:04.622750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:34.204 [2024-11-15 10:44:04.623280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:34.204 [2024-11-15 10:44:04.757031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:34.204 [2024-11-15 10:44:04.757321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:15:34.771 195.00 IOPS, 585.00 MiB/s [2024-11-15T10:44:05.331Z] [2024-11-15 10:44:05.119320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:34.771 [2024-11-15 10:44:05.119834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:34.771 [2024-11-15 10:44:05.247885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.030 [2024-11-15 10:44:05.511631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.030 "name": "raid_bdev1", 
00:15:35.030 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:35.030 "strip_size_kb": 0, 00:15:35.030 "state": "online", 00:15:35.030 "raid_level": "raid1", 00:15:35.030 "superblock": true, 00:15:35.030 "num_base_bdevs": 2, 00:15:35.030 "num_base_bdevs_discovered": 2, 00:15:35.030 "num_base_bdevs_operational": 2, 00:15:35.030 "process": { 00:15:35.030 "type": "rebuild", 00:15:35.030 "target": "spare", 00:15:35.030 "progress": { 00:15:35.030 "blocks": 12288, 00:15:35.030 "percent": 19 00:15:35.030 } 00:15:35.030 }, 00:15:35.030 "base_bdevs_list": [ 00:15:35.030 { 00:15:35.030 "name": "spare", 00:15:35.030 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:35.030 "is_configured": true, 00:15:35.030 "data_offset": 2048, 00:15:35.030 "data_size": 63488 00:15:35.030 }, 00:15:35.030 { 00:15:35.030 "name": "BaseBdev2", 00:15:35.030 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:35.030 "is_configured": true, 00:15:35.030 "data_offset": 2048, 00:15:35.030 "data_size": 63488 00:15:35.030 } 00:15:35.030 ] 00:15:35.030 }' 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.030 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.289 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.289 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:35.289 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.289 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.289 [2024-11-15 10:44:05.632105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.289 [2024-11-15 10:44:05.664157] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:35.289 [2024-11-15 10:44:05.672148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:35.289 [2024-11-15 10:44:05.789127] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.289 [2024-11-15 10:44:05.799200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.289 [2024-11-15 10:44:05.799261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.289 [2024-11-15 10:44:05.799287] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.549 [2024-11-15 10:44:05.848863] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.549 10:44:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.549 "name": "raid_bdev1", 00:15:35.549 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:35.549 "strip_size_kb": 0, 00:15:35.549 "state": "online", 00:15:35.549 "raid_level": "raid1", 00:15:35.549 "superblock": true, 00:15:35.549 "num_base_bdevs": 2, 00:15:35.549 "num_base_bdevs_discovered": 1, 00:15:35.549 "num_base_bdevs_operational": 1, 00:15:35.549 "base_bdevs_list": [ 00:15:35.549 { 00:15:35.549 "name": null, 00:15:35.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.549 "is_configured": false, 00:15:35.549 "data_offset": 0, 00:15:35.549 "data_size": 63488 00:15:35.549 }, 00:15:35.549 { 00:15:35.549 "name": "BaseBdev2", 00:15:35.549 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:35.549 "is_configured": true, 00:15:35.549 "data_offset": 2048, 00:15:35.549 "data_size": 63488 00:15:35.549 } 00:15:35.549 ] 00:15:35.549 }' 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.549 10:44:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.117 151.00 IOPS, 453.00 MiB/s [2024-11-15T10:44:06.677Z] 
10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.117 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.117 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.117 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.117 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.117 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.117 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.118 "name": "raid_bdev1", 00:15:36.118 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:36.118 "strip_size_kb": 0, 00:15:36.118 "state": "online", 00:15:36.118 "raid_level": "raid1", 00:15:36.118 "superblock": true, 00:15:36.118 "num_base_bdevs": 2, 00:15:36.118 "num_base_bdevs_discovered": 1, 00:15:36.118 "num_base_bdevs_operational": 1, 00:15:36.118 "base_bdevs_list": [ 00:15:36.118 { 00:15:36.118 "name": null, 00:15:36.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.118 "is_configured": false, 00:15:36.118 "data_offset": 0, 00:15:36.118 "data_size": 63488 00:15:36.118 }, 00:15:36.118 { 00:15:36.118 "name": "BaseBdev2", 00:15:36.118 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:36.118 "is_configured": true, 00:15:36.118 
"data_offset": 2048, 00:15:36.118 "data_size": 63488 00:15:36.118 } 00:15:36.118 ] 00:15:36.118 }' 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.118 [2024-11-15 10:44:06.527945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.118 10:44:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:36.118 [2024-11-15 10:44:06.594682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:36.118 [2024-11-15 10:44:06.597009] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.377 [2024-11-15 10:44:06.732322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:36.377 [2024-11-15 10:44:06.859769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:36.377 [2024-11-15 10:44:06.860027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:36.635 164.33 IOPS, 493.00 MiB/s 
[2024-11-15T10:44:07.195Z] [2024-11-15 10:44:07.182888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:36.893 [2024-11-15 10:44:07.310609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.152 "name": "raid_bdev1", 00:15:37.152 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:37.152 "strip_size_kb": 0, 00:15:37.152 "state": "online", 00:15:37.152 "raid_level": "raid1", 00:15:37.152 "superblock": true, 00:15:37.152 "num_base_bdevs": 2, 00:15:37.152 "num_base_bdevs_discovered": 2, 00:15:37.152 "num_base_bdevs_operational": 2, 00:15:37.152 "process": { 00:15:37.152 "type": "rebuild", 
00:15:37.152 "target": "spare", 00:15:37.152 "progress": { 00:15:37.152 "blocks": 12288, 00:15:37.152 "percent": 19 00:15:37.152 } 00:15:37.152 }, 00:15:37.152 "base_bdevs_list": [ 00:15:37.152 { 00:15:37.152 "name": "spare", 00:15:37.152 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:37.152 "is_configured": true, 00:15:37.152 "data_offset": 2048, 00:15:37.152 "data_size": 63488 00:15:37.152 }, 00:15:37.152 { 00:15:37.152 "name": "BaseBdev2", 00:15:37.152 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:37.152 "is_configured": true, 00:15:37.152 "data_offset": 2048, 00:15:37.152 "data_size": 63488 00:15:37.152 } 00:15:37.152 ] 00:15:37.152 }' 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.152 [2024-11-15 10:44:07.663439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.152 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.411 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.411 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:37.411 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:37.411 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:37.411 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:37.411 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:37.411 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:37.411 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@706 -- # local timeout=441 00:15:37.411 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.412 "name": "raid_bdev1", 00:15:37.412 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:37.412 "strip_size_kb": 0, 00:15:37.412 "state": "online", 00:15:37.412 "raid_level": "raid1", 00:15:37.412 "superblock": true, 00:15:37.412 "num_base_bdevs": 2, 00:15:37.412 "num_base_bdevs_discovered": 2, 00:15:37.412 "num_base_bdevs_operational": 2, 00:15:37.412 "process": { 00:15:37.412 "type": "rebuild", 00:15:37.412 "target": "spare", 00:15:37.412 "progress": { 00:15:37.412 "blocks": 14336, 00:15:37.412 "percent": 22 00:15:37.412 } 00:15:37.412 }, 00:15:37.412 "base_bdevs_list": [ 
00:15:37.412 { 00:15:37.412 "name": "spare", 00:15:37.412 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:37.412 "is_configured": true, 00:15:37.412 "data_offset": 2048, 00:15:37.412 "data_size": 63488 00:15:37.412 }, 00:15:37.412 { 00:15:37.412 "name": "BaseBdev2", 00:15:37.412 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:37.412 "is_configured": true, 00:15:37.412 "data_offset": 2048, 00:15:37.412 "data_size": 63488 00:15:37.412 } 00:15:37.412 ] 00:15:37.412 }' 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.412 [2024-11-15 10:44:07.798476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.412 10:44:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.670 147.50 IOPS, 442.50 MiB/s [2024-11-15T10:44:08.230Z] [2024-11-15 10:44:08.129759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:37.670 [2024-11-15 10:44:08.130259] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:37.929 [2024-11-15 10:44:08.234846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:38.188 [2024-11-15 10:44:08.573380] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 
-- # (( SECONDS < timeout )) 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.447 "name": "raid_bdev1", 00:15:38.447 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:38.447 "strip_size_kb": 0, 00:15:38.447 "state": "online", 00:15:38.447 "raid_level": "raid1", 00:15:38.447 "superblock": true, 00:15:38.447 "num_base_bdevs": 2, 00:15:38.447 "num_base_bdevs_discovered": 2, 00:15:38.447 "num_base_bdevs_operational": 2, 00:15:38.447 "process": { 00:15:38.447 "type": "rebuild", 00:15:38.447 "target": "spare", 00:15:38.447 "progress": { 00:15:38.447 "blocks": 28672, 00:15:38.447 "percent": 45 00:15:38.447 } 00:15:38.447 }, 00:15:38.447 "base_bdevs_list": [ 00:15:38.447 { 00:15:38.447 "name": "spare", 00:15:38.447 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:38.447 
"is_configured": true, 00:15:38.447 "data_offset": 2048, 00:15:38.447 "data_size": 63488 00:15:38.447 }, 00:15:38.447 { 00:15:38.447 "name": "BaseBdev2", 00:15:38.447 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:38.447 "is_configured": true, 00:15:38.447 "data_offset": 2048, 00:15:38.447 "data_size": 63488 00:15:38.447 } 00:15:38.447 ] 00:15:38.447 }' 00:15:38.447 10:44:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.706 [2024-11-15 10:44:09.013201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:38.706 10:44:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.706 10:44:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.706 129.20 IOPS, 387.60 MiB/s [2024-11-15T10:44:09.266Z] 10:44:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.706 10:44:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.706 [2024-11-15 10:44:09.131449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:38.965 [2024-11-15 10:44:09.484967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:39.223 [2024-11-15 10:44:09.612449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:39.741 114.67 IOPS, 344.00 MiB/s [2024-11-15T10:44:10.301Z] 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.741 "name": "raid_bdev1", 00:15:39.741 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:39.741 "strip_size_kb": 0, 00:15:39.741 "state": "online", 00:15:39.741 "raid_level": "raid1", 00:15:39.741 "superblock": true, 00:15:39.741 "num_base_bdevs": 2, 00:15:39.741 "num_base_bdevs_discovered": 2, 00:15:39.741 "num_base_bdevs_operational": 2, 00:15:39.741 "process": { 00:15:39.741 "type": "rebuild", 00:15:39.741 "target": "spare", 00:15:39.741 "progress": { 00:15:39.741 "blocks": 49152, 00:15:39.741 "percent": 77 00:15:39.741 } 00:15:39.741 }, 00:15:39.741 "base_bdevs_list": [ 00:15:39.741 { 00:15:39.741 "name": "spare", 00:15:39.741 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:39.741 "is_configured": true, 00:15:39.741 "data_offset": 2048, 00:15:39.741 "data_size": 63488 00:15:39.741 }, 00:15:39.741 { 00:15:39.741 "name": "BaseBdev2", 00:15:39.741 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:39.741 
"is_configured": true, 00:15:39.741 "data_offset": 2048, 00:15:39.741 "data_size": 63488 00:15:39.741 } 00:15:39.741 ] 00:15:39.741 }' 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.741 10:44:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.308 [2024-11-15 10:44:10.649097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:40.567 [2024-11-15 10:44:10.876225] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:40.567 [2024-11-15 10:44:10.983918] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:40.567 [2024-11-15 10:44:10.985997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.841 104.71 IOPS, 314.14 MiB/s [2024-11-15T10:44:11.401Z] 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.841 
10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.841 "name": "raid_bdev1", 00:15:40.841 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:40.841 "strip_size_kb": 0, 00:15:40.841 "state": "online", 00:15:40.841 "raid_level": "raid1", 00:15:40.841 "superblock": true, 00:15:40.841 "num_base_bdevs": 2, 00:15:40.841 "num_base_bdevs_discovered": 2, 00:15:40.841 "num_base_bdevs_operational": 2, 00:15:40.841 "base_bdevs_list": [ 00:15:40.841 { 00:15:40.841 "name": "spare", 00:15:40.841 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:40.841 "is_configured": true, 00:15:40.841 "data_offset": 2048, 00:15:40.841 "data_size": 63488 00:15:40.841 }, 00:15:40.841 { 00:15:40.841 "name": "BaseBdev2", 00:15:40.841 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:40.841 "is_configured": true, 00:15:40.841 "data_offset": 2048, 00:15:40.841 "data_size": 63488 00:15:40.841 } 00:15:40.841 ] 00:15:40.841 }' 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:40.841 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 
00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.129 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.129 "name": "raid_bdev1", 00:15:41.129 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:41.129 "strip_size_kb": 0, 00:15:41.129 "state": "online", 00:15:41.129 "raid_level": "raid1", 00:15:41.129 "superblock": true, 00:15:41.129 "num_base_bdevs": 2, 00:15:41.129 "num_base_bdevs_discovered": 2, 00:15:41.129 "num_base_bdevs_operational": 2, 00:15:41.130 "base_bdevs_list": [ 00:15:41.130 { 00:15:41.130 "name": "spare", 00:15:41.130 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:41.130 "is_configured": true, 00:15:41.130 "data_offset": 2048, 00:15:41.130 "data_size": 63488 00:15:41.130 }, 00:15:41.130 { 00:15:41.130 "name": "BaseBdev2", 
00:15:41.130 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:41.130 "is_configured": true, 00:15:41.130 "data_offset": 2048, 00:15:41.130 "data_size": 63488 00:15:41.130 } 00:15:41.130 ] 00:15:41.130 }' 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.130 "name": "raid_bdev1", 00:15:41.130 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:41.130 "strip_size_kb": 0, 00:15:41.130 "state": "online", 00:15:41.130 "raid_level": "raid1", 00:15:41.130 "superblock": true, 00:15:41.130 "num_base_bdevs": 2, 00:15:41.130 "num_base_bdevs_discovered": 2, 00:15:41.130 "num_base_bdevs_operational": 2, 00:15:41.130 "base_bdevs_list": [ 00:15:41.130 { 00:15:41.130 "name": "spare", 00:15:41.130 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:41.130 "is_configured": true, 00:15:41.130 "data_offset": 2048, 00:15:41.130 "data_size": 63488 00:15:41.130 }, 00:15:41.130 { 00:15:41.130 "name": "BaseBdev2", 00:15:41.130 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:41.130 "is_configured": true, 00:15:41.130 "data_offset": 2048, 00:15:41.130 "data_size": 63488 00:15:41.130 } 00:15:41.130 ] 00:15:41.130 }' 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.130 10:44:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.697 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:41.697 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.697 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.697 [2024-11-15 10:44:12.025772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.697 
[2024-11-15 10:44:12.025815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.697 95.50 IOPS, 286.50 MiB/s 00:15:41.697 Latency(us) 00:15:41.697 [2024-11-15T10:44:12.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.697 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:41.697 raid_bdev1 : 8.06 95.06 285.17 0.00 0.00 13566.35 294.17 118203.11 00:15:41.697 [2024-11-15T10:44:12.257Z] =================================================================================================================== 00:15:41.697 [2024-11-15T10:44:12.257Z] Total : 95.06 285.17 0.00 0.00 13566.35 294.17 118203.11 00:15:41.697 [2024-11-15 10:44:12.097316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.697 [2024-11-15 10:44:12.097452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.697 [2024-11-15 10:44:12.097562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.697 [2024-11-15 10:44:12.097579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:41.697 { 00:15:41.697 "results": [ 00:15:41.697 { 00:15:41.697 "job": "raid_bdev1", 00:15:41.697 "core_mask": "0x1", 00:15:41.697 "workload": "randrw", 00:15:41.697 "percentage": 50, 00:15:41.697 "status": "finished", 00:15:41.697 "queue_depth": 2, 00:15:41.697 "io_size": 3145728, 00:15:41.697 "runtime": 8.058326, 00:15:41.697 "iops": 95.05696344377232, 00:15:41.697 "mibps": 285.17089033131697, 00:15:41.697 "io_failed": 0, 00:15:41.697 "io_timeout": 0, 00:15:41.697 "avg_latency_us": 13566.349641585568, 00:15:41.697 "min_latency_us": 294.16727272727275, 00:15:41.697 "max_latency_us": 118203.11272727273 00:15:41.697 } 00:15:41.697 ], 00:15:41.697 "core_count": 1 00:15:41.697 } 00:15:41.697 10:44:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.697 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.697 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:41.697 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.697 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.697 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.698 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:41.956 /dev/nbd0 00:15:41.956 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:41.956 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.957 1+0 records in 00:15:41.957 1+0 records out 00:15:41.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035624 s, 11.5 MB/s 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.957 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:42.524 /dev/nbd1 00:15:42.524 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:42.524 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:42.524 10:44:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:42.524 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.525 1+0 records in 00:15:42.525 1+0 records out 00:15:42.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366448 s, 11.2 MB/s 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.525 10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.525 
10:44:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:42.525 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:42.525 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.525 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:42.525 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.525 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:42.525 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.525 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.092 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:43.351 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.352 
10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.352 [2024-11-15 10:44:13.693082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:43.352 [2024-11-15 10:44:13.693159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.352 [2024-11-15 10:44:13.693198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:43.352 [2024-11-15 10:44:13.693214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.352 [2024-11-15 10:44:13.696053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.352 [2024-11-15 10:44:13.696101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.352 [2024-11-15 10:44:13.696234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:43.352 [2024-11-15 10:44:13.696305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.352 [2024-11-15 10:44:13.696537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.352 spare 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.352 [2024-11-15 10:44:13.796685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:15:43.352 [2024-11-15 10:44:13.796763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:43.352 [2024-11-15 10:44:13.797178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:43.352 [2024-11-15 10:44:13.797465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:43.352 [2024-11-15 10:44:13.797493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:43.352 [2024-11-15 10:44:13.797751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.352 "name": "raid_bdev1", 00:15:43.352 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:43.352 "strip_size_kb": 0, 00:15:43.352 "state": "online", 00:15:43.352 "raid_level": "raid1", 00:15:43.352 "superblock": true, 00:15:43.352 "num_base_bdevs": 2, 00:15:43.352 "num_base_bdevs_discovered": 2, 00:15:43.352 "num_base_bdevs_operational": 2, 00:15:43.352 "base_bdevs_list": [ 00:15:43.352 { 00:15:43.352 "name": "spare", 00:15:43.352 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:43.352 "is_configured": true, 00:15:43.352 "data_offset": 2048, 00:15:43.352 "data_size": 63488 00:15:43.352 }, 00:15:43.352 { 00:15:43.352 "name": "BaseBdev2", 00:15:43.352 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:43.352 "is_configured": true, 00:15:43.352 "data_offset": 2048, 00:15:43.352 "data_size": 63488 00:15:43.352 } 00:15:43.352 ] 00:15:43.352 }' 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.352 10:44:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.919 "name": "raid_bdev1", 00:15:43.919 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:43.919 "strip_size_kb": 0, 00:15:43.919 "state": "online", 00:15:43.919 "raid_level": "raid1", 00:15:43.919 "superblock": true, 00:15:43.919 "num_base_bdevs": 2, 00:15:43.919 "num_base_bdevs_discovered": 2, 00:15:43.919 "num_base_bdevs_operational": 2, 00:15:43.919 "base_bdevs_list": [ 00:15:43.919 { 00:15:43.919 "name": "spare", 00:15:43.919 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:43.919 "is_configured": true, 00:15:43.919 "data_offset": 2048, 00:15:43.919 "data_size": 63488 00:15:43.919 }, 00:15:43.919 { 00:15:43.919 "name": "BaseBdev2", 00:15:43.919 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:43.919 "is_configured": true, 00:15:43.919 "data_offset": 2048, 00:15:43.919 "data_size": 63488 00:15:43.919 } 00:15:43.919 ] 00:15:43.919 }' 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:43.919 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.178 [2024-11-15 10:44:14.518045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.178 "name": "raid_bdev1", 00:15:44.178 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:44.178 "strip_size_kb": 0, 00:15:44.178 "state": "online", 00:15:44.178 "raid_level": "raid1", 00:15:44.178 "superblock": true, 00:15:44.178 "num_base_bdevs": 2, 00:15:44.178 "num_base_bdevs_discovered": 1, 00:15:44.178 "num_base_bdevs_operational": 1, 00:15:44.178 "base_bdevs_list": [ 00:15:44.178 { 00:15:44.178 "name": null, 00:15:44.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.178 "is_configured": false, 00:15:44.178 "data_offset": 0, 00:15:44.178 "data_size": 63488 00:15:44.178 }, 00:15:44.178 { 00:15:44.178 "name": "BaseBdev2", 00:15:44.178 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:44.178 
"is_configured": true, 00:15:44.178 "data_offset": 2048, 00:15:44.178 "data_size": 63488 00:15:44.178 } 00:15:44.178 ] 00:15:44.178 }' 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.178 10:44:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.746 10:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:44.746 10:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.746 10:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.746 [2024-11-15 10:44:15.062276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.746 [2024-11-15 10:44:15.062527] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:44.746 [2024-11-15 10:44:15.062564] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:44.746 [2024-11-15 10:44:15.062619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.746 [2024-11-15 10:44:15.076728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:44.746 10:44:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.746 10:44:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:44.746 [2024-11-15 10:44:15.079012] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.683 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.683 "name": "raid_bdev1", 00:15:45.683 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:45.683 "strip_size_kb": 0, 00:15:45.683 "state": "online", 
00:15:45.683 "raid_level": "raid1", 00:15:45.683 "superblock": true, 00:15:45.683 "num_base_bdevs": 2, 00:15:45.683 "num_base_bdevs_discovered": 2, 00:15:45.683 "num_base_bdevs_operational": 2, 00:15:45.683 "process": { 00:15:45.683 "type": "rebuild", 00:15:45.683 "target": "spare", 00:15:45.683 "progress": { 00:15:45.683 "blocks": 20480, 00:15:45.683 "percent": 32 00:15:45.683 } 00:15:45.683 }, 00:15:45.683 "base_bdevs_list": [ 00:15:45.683 { 00:15:45.683 "name": "spare", 00:15:45.683 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:45.683 "is_configured": true, 00:15:45.683 "data_offset": 2048, 00:15:45.683 "data_size": 63488 00:15:45.683 }, 00:15:45.683 { 00:15:45.683 "name": "BaseBdev2", 00:15:45.683 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:45.684 "is_configured": true, 00:15:45.684 "data_offset": 2048, 00:15:45.684 "data_size": 63488 00:15:45.684 } 00:15:45.684 ] 00:15:45.684 }' 00:15:45.684 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.684 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.684 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.684 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.684 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:45.684 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.684 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.684 [2024-11-15 10:44:16.240870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.942 [2024-11-15 10:44:16.285625] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:45.942 [2024-11-15 
10:44:16.285703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.942 [2024-11-15 10:44:16.285726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.942 [2024-11-15 10:44:16.285743] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.943 "name": "raid_bdev1", 00:15:45.943 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:45.943 "strip_size_kb": 0, 00:15:45.943 "state": "online", 00:15:45.943 "raid_level": "raid1", 00:15:45.943 "superblock": true, 00:15:45.943 "num_base_bdevs": 2, 00:15:45.943 "num_base_bdevs_discovered": 1, 00:15:45.943 "num_base_bdevs_operational": 1, 00:15:45.943 "base_bdevs_list": [ 00:15:45.943 { 00:15:45.943 "name": null, 00:15:45.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.943 "is_configured": false, 00:15:45.943 "data_offset": 0, 00:15:45.943 "data_size": 63488 00:15:45.943 }, 00:15:45.943 { 00:15:45.943 "name": "BaseBdev2", 00:15:45.943 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:45.943 "is_configured": true, 00:15:45.943 "data_offset": 2048, 00:15:45.943 "data_size": 63488 00:15:45.943 } 00:15:45.943 ] 00:15:45.943 }' 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.943 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.510 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:46.510 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.510 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.510 [2024-11-15 10:44:16.843222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:46.510 [2024-11-15 10:44:16.843302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.510 [2024-11-15 10:44:16.843333] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:15:46.510 [2024-11-15 10:44:16.843366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.510 [2024-11-15 10:44:16.843954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.510 [2024-11-15 10:44:16.843996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:46.510 [2024-11-15 10:44:16.844124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:46.510 [2024-11-15 10:44:16.844151] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.510 [2024-11-15 10:44:16.844165] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:46.510 [2024-11-15 10:44:16.844207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.510 [2024-11-15 10:44:16.858320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:46.510 spare 00:15:46.510 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.510 10:44:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:46.510 [2024-11-15 10:44:16.860639] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.445 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.445 "name": "raid_bdev1", 00:15:47.445 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:47.445 "strip_size_kb": 0, 00:15:47.445 "state": "online", 00:15:47.445 "raid_level": "raid1", 00:15:47.445 "superblock": true, 00:15:47.445 "num_base_bdevs": 2, 00:15:47.445 "num_base_bdevs_discovered": 2, 00:15:47.445 "num_base_bdevs_operational": 2, 00:15:47.445 "process": { 00:15:47.445 "type": "rebuild", 00:15:47.445 "target": "spare", 00:15:47.445 "progress": { 00:15:47.445 "blocks": 20480, 00:15:47.445 "percent": 32 00:15:47.445 } 00:15:47.445 }, 00:15:47.445 "base_bdevs_list": [ 00:15:47.445 { 00:15:47.445 "name": "spare", 00:15:47.445 "uuid": "b00e4fe9-cf8b-5326-8fa3-9786d8de7c0c", 00:15:47.445 "is_configured": true, 00:15:47.445 "data_offset": 2048, 00:15:47.445 "data_size": 63488 00:15:47.445 }, 00:15:47.445 { 00:15:47.445 "name": "BaseBdev2", 00:15:47.446 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:47.446 "is_configured": true, 00:15:47.446 "data_offset": 2048, 00:15:47.446 "data_size": 63488 00:15:47.446 } 00:15:47.446 ] 00:15:47.446 }' 00:15:47.446 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.446 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:47.446 10:44:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.711 [2024-11-15 10:44:18.022244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.711 [2024-11-15 10:44:18.067137] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:47.711 [2024-11-15 10:44:18.067241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.711 [2024-11-15 10:44:18.067274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.711 [2024-11-15 10:44:18.067286] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.711 "name": "raid_bdev1", 00:15:47.711 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:47.711 "strip_size_kb": 0, 00:15:47.711 "state": "online", 00:15:47.711 "raid_level": "raid1", 00:15:47.711 "superblock": true, 00:15:47.711 "num_base_bdevs": 2, 00:15:47.711 "num_base_bdevs_discovered": 1, 00:15:47.711 "num_base_bdevs_operational": 1, 00:15:47.711 "base_bdevs_list": [ 00:15:47.711 { 00:15:47.711 "name": null, 00:15:47.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.711 "is_configured": false, 00:15:47.711 "data_offset": 0, 00:15:47.711 "data_size": 63488 00:15:47.711 }, 00:15:47.711 { 00:15:47.711 "name": "BaseBdev2", 00:15:47.711 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:47.711 "is_configured": true, 00:15:47.711 "data_offset": 2048, 00:15:47.711 "data_size": 63488 00:15:47.711 } 00:15:47.711 ] 00:15:47.711 }' 
00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.711 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.279 "name": "raid_bdev1", 00:15:48.279 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:48.279 "strip_size_kb": 0, 00:15:48.279 "state": "online", 00:15:48.279 "raid_level": "raid1", 00:15:48.279 "superblock": true, 00:15:48.279 "num_base_bdevs": 2, 00:15:48.279 "num_base_bdevs_discovered": 1, 00:15:48.279 "num_base_bdevs_operational": 1, 00:15:48.279 "base_bdevs_list": [ 00:15:48.279 { 00:15:48.279 "name": null, 00:15:48.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.279 "is_configured": false, 00:15:48.279 "data_offset": 0, 
00:15:48.279 "data_size": 63488 00:15:48.279 }, 00:15:48.279 { 00:15:48.279 "name": "BaseBdev2", 00:15:48.279 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:48.279 "is_configured": true, 00:15:48.279 "data_offset": 2048, 00:15:48.279 "data_size": 63488 00:15:48.279 } 00:15:48.279 ] 00:15:48.279 }' 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.279 [2024-11-15 10:44:18.808632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:48.279 [2024-11-15 10:44:18.808694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.279 [2024-11-15 10:44:18.808732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:48.279 [2024-11-15 10:44:18.808750] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.279 [2024-11-15 10:44:18.809269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.279 [2024-11-15 10:44:18.809313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.279 [2024-11-15 10:44:18.809433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:48.279 [2024-11-15 10:44:18.809454] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:48.279 [2024-11-15 10:44:18.809468] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:48.279 [2024-11-15 10:44:18.809481] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:48.279 BaseBdev1 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.279 10:44:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.656 "name": "raid_bdev1", 00:15:49.656 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:49.656 "strip_size_kb": 0, 00:15:49.656 "state": "online", 00:15:49.656 "raid_level": "raid1", 00:15:49.656 "superblock": true, 00:15:49.656 "num_base_bdevs": 2, 00:15:49.656 "num_base_bdevs_discovered": 1, 00:15:49.656 "num_base_bdevs_operational": 1, 00:15:49.656 "base_bdevs_list": [ 00:15:49.656 { 00:15:49.656 "name": null, 00:15:49.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.656 "is_configured": false, 00:15:49.656 "data_offset": 0, 00:15:49.656 "data_size": 63488 00:15:49.656 }, 00:15:49.656 { 00:15:49.656 "name": "BaseBdev2", 00:15:49.656 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:49.656 "is_configured": true, 00:15:49.656 "data_offset": 2048, 00:15:49.656 "data_size": 63488 00:15:49.656 } 00:15:49.656 ] 00:15:49.656 }' 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.656 10:44:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.915 "name": "raid_bdev1", 00:15:49.915 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:49.915 "strip_size_kb": 0, 00:15:49.915 "state": "online", 00:15:49.915 "raid_level": "raid1", 00:15:49.915 "superblock": true, 00:15:49.915 "num_base_bdevs": 2, 00:15:49.915 "num_base_bdevs_discovered": 1, 00:15:49.915 "num_base_bdevs_operational": 1, 00:15:49.915 "base_bdevs_list": [ 00:15:49.915 { 00:15:49.915 "name": null, 00:15:49.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.915 "is_configured": false, 00:15:49.915 "data_offset": 0, 00:15:49.915 "data_size": 63488 00:15:49.915 }, 00:15:49.915 { 00:15:49.915 "name": "BaseBdev2", 00:15:49.915 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:49.915 "is_configured": true, 
00:15:49.915 "data_offset": 2048, 00:15:49.915 "data_size": 63488 00:15:49.915 } 00:15:49.915 ] 00:15:49.915 }' 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.915 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.174 [2024-11-15 10:44:20.489449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.174 [2024-11-15 10:44:20.489774] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.174 [2024-11-15 10:44:20.489811] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:50.174 request: 00:15:50.174 { 00:15:50.174 "base_bdev": "BaseBdev1", 00:15:50.174 "raid_bdev": "raid_bdev1", 00:15:50.174 "method": "bdev_raid_add_base_bdev", 00:15:50.174 "req_id": 1 00:15:50.174 } 00:15:50.174 Got JSON-RPC error response 00:15:50.174 response: 00:15:50.174 { 00:15:50.174 "code": -22, 00:15:50.174 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:50.174 } 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:50.174 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:50.175 10:44:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:51.111 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.111 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.111 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.112 "name": "raid_bdev1", 00:15:51.112 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:51.112 "strip_size_kb": 0, 00:15:51.112 "state": "online", 00:15:51.112 "raid_level": "raid1", 00:15:51.112 "superblock": true, 00:15:51.112 "num_base_bdevs": 2, 00:15:51.112 "num_base_bdevs_discovered": 1, 00:15:51.112 "num_base_bdevs_operational": 1, 00:15:51.112 "base_bdevs_list": [ 00:15:51.112 { 00:15:51.112 "name": null, 00:15:51.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.112 "is_configured": false, 00:15:51.112 "data_offset": 0, 00:15:51.112 "data_size": 63488 00:15:51.112 }, 00:15:51.112 { 00:15:51.112 "name": "BaseBdev2", 00:15:51.112 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:51.112 "is_configured": true, 00:15:51.112 "data_offset": 2048, 00:15:51.112 "data_size": 63488 00:15:51.112 } 00:15:51.112 ] 00:15:51.112 }' 
00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.112 10:44:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.679 "name": "raid_bdev1", 00:15:51.679 "uuid": "e496aedb-6b7d-4c86-aeb2-1fe849510641", 00:15:51.679 "strip_size_kb": 0, 00:15:51.679 "state": "online", 00:15:51.679 "raid_level": "raid1", 00:15:51.679 "superblock": true, 00:15:51.679 "num_base_bdevs": 2, 00:15:51.679 "num_base_bdevs_discovered": 1, 00:15:51.679 "num_base_bdevs_operational": 1, 00:15:51.679 "base_bdevs_list": [ 00:15:51.679 { 00:15:51.679 "name": null, 00:15:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.679 "is_configured": false, 00:15:51.679 "data_offset": 0, 
00:15:51.679 "data_size": 63488 00:15:51.679 }, 00:15:51.679 { 00:15:51.679 "name": "BaseBdev2", 00:15:51.679 "uuid": "b8e32b5d-e067-554e-9559-cbb4c4698d62", 00:15:51.679 "is_configured": true, 00:15:51.679 "data_offset": 2048, 00:15:51.679 "data_size": 63488 00:15:51.679 } 00:15:51.679 ] 00:15:51.679 }' 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77231 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77231 ']' 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77231 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77231 00:15:51.679 killing process with pid 77231 00:15:51.679 Received shutdown signal, test time was about 18.181829 seconds 00:15:51.679 00:15:51.679 Latency(us) 00:15:51.679 [2024-11-15T10:44:22.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.679 [2024-11-15T10:44:22.239Z] =================================================================================================================== 00:15:51.679 [2024-11-15T10:44:22.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77231' 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77231 00:15:51.679 [2024-11-15 10:44:22.201112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.679 10:44:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77231 00:15:51.679 [2024-11-15 10:44:22.201271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.679 [2024-11-15 10:44:22.201339] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.679 [2024-11-15 10:44:22.201378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:51.937 [2024-11-15 10:44:22.397247] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.312 00:15:53.312 real 0m21.459s 00:15:53.312 user 0m29.388s 00:15:53.312 sys 0m1.830s 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.312 ************************************ 00:15:53.312 END TEST raid_rebuild_test_sb_io 00:15:53.312 ************************************ 00:15:53.312 10:44:23 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:53.312 10:44:23 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:53.312 10:44:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:15:53.312 10:44:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:53.312 10:44:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.312 ************************************ 00:15:53.312 START TEST raid_rebuild_test 00:15:53.312 ************************************ 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.312 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77927 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77927 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77927 ']' 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.313 10:44:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:53.313 10:44:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.313 [2024-11-15 10:44:23.597787] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:15:53.313 [2024-11-15 10:44:23.598109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:53.313 Zero copy mechanism will not be used. 00:15:53.313 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77927 ] 00:15:53.313 [2024-11-15 10:44:23.775225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.571 [2024-11-15 10:44:23.900021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.571 [2024-11-15 10:44:24.105306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.571 [2024-11-15 10:44:24.105574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.138 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:54.138 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.139 BaseBdev1_malloc 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.139 [2024-11-15 10:44:24.632738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:54.139 [2024-11-15 10:44:24.632966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.139 [2024-11-15 10:44:24.633073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:54.139 [2024-11-15 10:44:24.633277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.139 [2024-11-15 10:44:24.636114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.139 BaseBdev1 00:15:54.139 [2024-11-15 10:44:24.636300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:54.139 BaseBdev2_malloc 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.139 [2024-11-15 10:44:24.680118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:54.139 [2024-11-15 10:44:24.680374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.139 [2024-11-15 10:44:24.680583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:54.139 [2024-11-15 10:44:24.680798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.139 [2024-11-15 10:44:24.683537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.139 [2024-11-15 10:44:24.683592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.139 BaseBdev2 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.139 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.405 BaseBdev3_malloc 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.405 [2024-11-15 10:44:24.746049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:54.405 [2024-11-15 10:44:24.746252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.405 [2024-11-15 10:44:24.746457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:54.405 [2024-11-15 10:44:24.746652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.405 [2024-11-15 10:44:24.749506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.405 [2024-11-15 10:44:24.749560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.405 BaseBdev3 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.405 BaseBdev4_malloc 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.405 [2024-11-15 10:44:24.794060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:54.405 [2024-11-15 10:44:24.794268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.405 [2024-11-15 10:44:24.794477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:54.405 [2024-11-15 10:44:24.794652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.405 [2024-11-15 10:44:24.797517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.405 [2024-11-15 10:44:24.797694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:54.405 BaseBdev4 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.405 spare_malloc 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.405 spare_delay 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.405 
10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.405 [2024-11-15 10:44:24.849960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.405 [2024-11-15 10:44:24.850036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.405 [2024-11-15 10:44:24.850075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:54.405 [2024-11-15 10:44:24.850103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.405 [2024-11-15 10:44:24.852844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.405 spare 00:15:54.405 [2024-11-15 10:44:24.853025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.405 [2024-11-15 10:44:24.858075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.405 [2024-11-15 10:44:24.860380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.405 [2024-11-15 10:44:24.860472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.405 [2024-11-15 10:44:24.860557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:54.405 [2024-11-15 10:44:24.860674] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:15:54.405 [2024-11-15 10:44:24.860699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:54.405 [2024-11-15 10:44:24.861025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:54.405 [2024-11-15 10:44:24.861267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:54.405 [2024-11-15 10:44:24.861287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:54.405 [2024-11-15 10:44:24.861517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.405 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.406 10:44:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.406 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.406 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.406 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.406 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.406 "name": "raid_bdev1", 00:15:54.406 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:15:54.406 "strip_size_kb": 0, 00:15:54.406 "state": "online", 00:15:54.406 "raid_level": "raid1", 00:15:54.406 "superblock": false, 00:15:54.406 "num_base_bdevs": 4, 00:15:54.406 "num_base_bdevs_discovered": 4, 00:15:54.406 "num_base_bdevs_operational": 4, 00:15:54.406 "base_bdevs_list": [ 00:15:54.406 { 00:15:54.406 "name": "BaseBdev1", 00:15:54.406 "uuid": "f6e7908d-4030-597a-a24a-b8889febe092", 00:15:54.406 "is_configured": true, 00:15:54.406 "data_offset": 0, 00:15:54.406 "data_size": 65536 00:15:54.406 }, 00:15:54.406 { 00:15:54.406 "name": "BaseBdev2", 00:15:54.406 "uuid": "2541149d-b409-520c-b4e3-ac0e1ab993af", 00:15:54.406 "is_configured": true, 00:15:54.406 "data_offset": 0, 00:15:54.406 "data_size": 65536 00:15:54.406 }, 00:15:54.406 { 00:15:54.406 "name": "BaseBdev3", 00:15:54.406 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:15:54.406 "is_configured": true, 00:15:54.406 "data_offset": 0, 00:15:54.406 "data_size": 65536 00:15:54.406 }, 00:15:54.406 { 00:15:54.406 "name": "BaseBdev4", 00:15:54.406 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:15:54.406 "is_configured": true, 00:15:54.406 "data_offset": 0, 00:15:54.406 "data_size": 65536 00:15:54.406 } 00:15:54.406 ] 00:15:54.406 }' 00:15:54.406 10:44:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.406 10:44:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.974 [2024-11-15 10:44:25.398664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:54.974 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:54.975 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:55.246 [2024-11-15 10:44:25.798430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:55.508 /dev/nbd0 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:55.508 10:44:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.508 1+0 records in 00:15:55.508 1+0 records out 00:15:55.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332985 s, 12.3 MB/s 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:55.508 10:44:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:05.481 65536+0 records in 00:16:05.481 65536+0 records out 00:16:05.481 33554432 bytes (34 MB, 32 MiB) copied, 8.52429 s, 3.9 MB/s 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.481 
10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:05.481 [2024-11-15 10:44:34.674294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.481 [2024-11-15 10:44:34.706398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.481 "name": "raid_bdev1", 00:16:05.481 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:05.481 "strip_size_kb": 0, 00:16:05.481 "state": "online", 00:16:05.481 "raid_level": "raid1", 00:16:05.481 "superblock": false, 00:16:05.481 "num_base_bdevs": 4, 00:16:05.481 "num_base_bdevs_discovered": 3, 00:16:05.481 "num_base_bdevs_operational": 3, 00:16:05.481 "base_bdevs_list": [ 00:16:05.481 { 00:16:05.481 "name": null, 00:16:05.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.481 
"is_configured": false, 00:16:05.481 "data_offset": 0, 00:16:05.481 "data_size": 65536 00:16:05.481 }, 00:16:05.481 { 00:16:05.481 "name": "BaseBdev2", 00:16:05.481 "uuid": "2541149d-b409-520c-b4e3-ac0e1ab993af", 00:16:05.481 "is_configured": true, 00:16:05.481 "data_offset": 0, 00:16:05.481 "data_size": 65536 00:16:05.481 }, 00:16:05.481 { 00:16:05.481 "name": "BaseBdev3", 00:16:05.481 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:05.481 "is_configured": true, 00:16:05.481 "data_offset": 0, 00:16:05.481 "data_size": 65536 00:16:05.481 }, 00:16:05.481 { 00:16:05.481 "name": "BaseBdev4", 00:16:05.481 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:05.481 "is_configured": true, 00:16:05.481 "data_offset": 0, 00:16:05.481 "data_size": 65536 00:16:05.481 } 00:16:05.481 ] 00:16:05.481 }' 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.481 10:44:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.481 10:44:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.481 10:44:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.481 10:44:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.481 [2024-11-15 10:44:35.202536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.481 [2024-11-15 10:44:35.216842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:05.481 10:44:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.481 10:44:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:05.481 [2024-11-15 10:44:35.219259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.740 "name": "raid_bdev1", 00:16:05.740 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:05.740 "strip_size_kb": 0, 00:16:05.740 "state": "online", 00:16:05.740 "raid_level": "raid1", 00:16:05.740 "superblock": false, 00:16:05.740 "num_base_bdevs": 4, 00:16:05.740 "num_base_bdevs_discovered": 4, 00:16:05.740 "num_base_bdevs_operational": 4, 00:16:05.740 "process": { 00:16:05.740 "type": "rebuild", 00:16:05.740 "target": "spare", 00:16:05.740 "progress": { 00:16:05.740 "blocks": 20480, 00:16:05.740 "percent": 31 00:16:05.740 } 00:16:05.740 }, 00:16:05.740 "base_bdevs_list": [ 00:16:05.740 { 00:16:05.740 "name": "spare", 00:16:05.740 "uuid": "689e6f00-9fc3-533b-a6e1-2901cb5cf73e", 00:16:05.740 "is_configured": true, 00:16:05.740 "data_offset": 0, 00:16:05.740 "data_size": 65536 00:16:05.740 }, 00:16:05.740 { 00:16:05.740 "name": "BaseBdev2", 00:16:05.740 "uuid": 
"2541149d-b409-520c-b4e3-ac0e1ab993af", 00:16:05.740 "is_configured": true, 00:16:05.740 "data_offset": 0, 00:16:05.740 "data_size": 65536 00:16:05.740 }, 00:16:05.740 { 00:16:05.740 "name": "BaseBdev3", 00:16:05.740 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:05.740 "is_configured": true, 00:16:05.740 "data_offset": 0, 00:16:05.740 "data_size": 65536 00:16:05.740 }, 00:16:05.740 { 00:16:05.740 "name": "BaseBdev4", 00:16:05.740 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:05.740 "is_configured": true, 00:16:05.740 "data_offset": 0, 00:16:05.740 "data_size": 65536 00:16:05.740 } 00:16:05.740 ] 00:16:05.740 }' 00:16:05.740 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.999 [2024-11-15 10:44:36.368874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.999 [2024-11-15 10:44:36.426305] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:05.999 [2024-11-15 10:44:36.426425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.999 [2024-11-15 10:44:36.426452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.999 [2024-11-15 10:44:36.426472] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.999 "name": "raid_bdev1", 00:16:05.999 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:05.999 "strip_size_kb": 0, 00:16:05.999 "state": "online", 
00:16:05.999 "raid_level": "raid1", 00:16:05.999 "superblock": false, 00:16:05.999 "num_base_bdevs": 4, 00:16:05.999 "num_base_bdevs_discovered": 3, 00:16:05.999 "num_base_bdevs_operational": 3, 00:16:05.999 "base_bdevs_list": [ 00:16:05.999 { 00:16:05.999 "name": null, 00:16:05.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.999 "is_configured": false, 00:16:05.999 "data_offset": 0, 00:16:05.999 "data_size": 65536 00:16:05.999 }, 00:16:05.999 { 00:16:05.999 "name": "BaseBdev2", 00:16:05.999 "uuid": "2541149d-b409-520c-b4e3-ac0e1ab993af", 00:16:05.999 "is_configured": true, 00:16:05.999 "data_offset": 0, 00:16:05.999 "data_size": 65536 00:16:05.999 }, 00:16:05.999 { 00:16:05.999 "name": "BaseBdev3", 00:16:05.999 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:05.999 "is_configured": true, 00:16:05.999 "data_offset": 0, 00:16:05.999 "data_size": 65536 00:16:05.999 }, 00:16:05.999 { 00:16:05.999 "name": "BaseBdev4", 00:16:05.999 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:05.999 "is_configured": true, 00:16:05.999 "data_offset": 0, 00:16:05.999 "data_size": 65536 00:16:05.999 } 00:16:05.999 ] 00:16:05.999 }' 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.999 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.567 10:44:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.567 10:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.567 "name": "raid_bdev1", 00:16:06.567 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:06.567 "strip_size_kb": 0, 00:16:06.567 "state": "online", 00:16:06.567 "raid_level": "raid1", 00:16:06.567 "superblock": false, 00:16:06.567 "num_base_bdevs": 4, 00:16:06.567 "num_base_bdevs_discovered": 3, 00:16:06.567 "num_base_bdevs_operational": 3, 00:16:06.567 "base_bdevs_list": [ 00:16:06.567 { 00:16:06.567 "name": null, 00:16:06.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.567 "is_configured": false, 00:16:06.567 "data_offset": 0, 00:16:06.567 "data_size": 65536 00:16:06.567 }, 00:16:06.567 { 00:16:06.567 "name": "BaseBdev2", 00:16:06.567 "uuid": "2541149d-b409-520c-b4e3-ac0e1ab993af", 00:16:06.567 "is_configured": true, 00:16:06.567 "data_offset": 0, 00:16:06.567 "data_size": 65536 00:16:06.567 }, 00:16:06.567 { 00:16:06.567 "name": "BaseBdev3", 00:16:06.567 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:06.567 "is_configured": true, 00:16:06.567 "data_offset": 0, 00:16:06.567 "data_size": 65536 00:16:06.567 }, 00:16:06.567 { 00:16:06.567 "name": "BaseBdev4", 00:16:06.567 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:06.567 "is_configured": true, 00:16:06.567 "data_offset": 0, 00:16:06.567 "data_size": 65536 00:16:06.567 } 00:16:06.567 ] 00:16:06.567 }' 00:16:06.567 10:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.567 10:44:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.567 10:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.827 10:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.827 10:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:06.827 10:44:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.827 10:44:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.827 [2024-11-15 10:44:37.137041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.827 [2024-11-15 10:44:37.150178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:16:06.827 10:44:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.827 10:44:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:06.827 [2024-11-15 10:44:37.152561] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.764 "name": "raid_bdev1", 00:16:07.764 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:07.764 "strip_size_kb": 0, 00:16:07.764 "state": "online", 00:16:07.764 "raid_level": "raid1", 00:16:07.764 "superblock": false, 00:16:07.764 "num_base_bdevs": 4, 00:16:07.764 "num_base_bdevs_discovered": 4, 00:16:07.764 "num_base_bdevs_operational": 4, 00:16:07.764 "process": { 00:16:07.764 "type": "rebuild", 00:16:07.764 "target": "spare", 00:16:07.764 "progress": { 00:16:07.764 "blocks": 20480, 00:16:07.764 "percent": 31 00:16:07.764 } 00:16:07.764 }, 00:16:07.764 "base_bdevs_list": [ 00:16:07.764 { 00:16:07.764 "name": "spare", 00:16:07.764 "uuid": "689e6f00-9fc3-533b-a6e1-2901cb5cf73e", 00:16:07.764 "is_configured": true, 00:16:07.764 "data_offset": 0, 00:16:07.764 "data_size": 65536 00:16:07.764 }, 00:16:07.764 { 00:16:07.764 "name": "BaseBdev2", 00:16:07.764 "uuid": "2541149d-b409-520c-b4e3-ac0e1ab993af", 00:16:07.764 "is_configured": true, 00:16:07.764 "data_offset": 0, 00:16:07.764 "data_size": 65536 00:16:07.764 }, 00:16:07.764 { 00:16:07.764 "name": "BaseBdev3", 00:16:07.764 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:07.764 "is_configured": true, 00:16:07.764 "data_offset": 0, 00:16:07.764 "data_size": 65536 00:16:07.764 }, 00:16:07.764 { 00:16:07.764 "name": "BaseBdev4", 00:16:07.764 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:07.764 "is_configured": true, 00:16:07.764 "data_offset": 0, 00:16:07.764 "data_size": 65536 00:16:07.764 } 00:16:07.764 ] 00:16:07.764 }' 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.764 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.023 [2024-11-15 10:44:38.322249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.023 [2024-11-15 10:44:38.359243] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.023 10:44:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.023 "name": "raid_bdev1", 00:16:08.023 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:08.023 "strip_size_kb": 0, 00:16:08.023 "state": "online", 00:16:08.023 "raid_level": "raid1", 00:16:08.023 "superblock": false, 00:16:08.023 "num_base_bdevs": 4, 00:16:08.023 "num_base_bdevs_discovered": 3, 00:16:08.023 "num_base_bdevs_operational": 3, 00:16:08.023 "process": { 00:16:08.023 "type": "rebuild", 00:16:08.023 "target": "spare", 00:16:08.023 "progress": { 00:16:08.023 "blocks": 24576, 00:16:08.023 "percent": 37 00:16:08.023 } 00:16:08.023 }, 00:16:08.023 "base_bdevs_list": [ 00:16:08.023 { 00:16:08.023 "name": "spare", 00:16:08.023 "uuid": "689e6f00-9fc3-533b-a6e1-2901cb5cf73e", 00:16:08.023 "is_configured": true, 00:16:08.023 "data_offset": 0, 00:16:08.023 "data_size": 65536 00:16:08.023 }, 00:16:08.023 { 00:16:08.023 "name": null, 00:16:08.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.023 "is_configured": false, 00:16:08.023 "data_offset": 0, 00:16:08.023 "data_size": 65536 00:16:08.023 }, 00:16:08.023 { 00:16:08.023 "name": "BaseBdev3", 00:16:08.023 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:08.023 "is_configured": true, 
00:16:08.023 "data_offset": 0, 00:16:08.023 "data_size": 65536 00:16:08.023 }, 00:16:08.023 { 00:16:08.023 "name": "BaseBdev4", 00:16:08.023 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:08.023 "is_configured": true, 00:16:08.023 "data_offset": 0, 00:16:08.023 "data_size": 65536 00:16:08.023 } 00:16:08.023 ] 00:16:08.023 }' 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=472 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.023 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.024 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.024 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.024 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.024 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.024 10:44:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.024 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.024 10:44:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.282 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.282 "name": "raid_bdev1", 00:16:08.282 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:08.282 "strip_size_kb": 0, 00:16:08.282 "state": "online", 00:16:08.282 "raid_level": "raid1", 00:16:08.282 "superblock": false, 00:16:08.282 "num_base_bdevs": 4, 00:16:08.282 "num_base_bdevs_discovered": 3, 00:16:08.282 "num_base_bdevs_operational": 3, 00:16:08.282 "process": { 00:16:08.282 "type": "rebuild", 00:16:08.282 "target": "spare", 00:16:08.282 "progress": { 00:16:08.282 "blocks": 26624, 00:16:08.282 "percent": 40 00:16:08.282 } 00:16:08.282 }, 00:16:08.282 "base_bdevs_list": [ 00:16:08.282 { 00:16:08.282 "name": "spare", 00:16:08.282 "uuid": "689e6f00-9fc3-533b-a6e1-2901cb5cf73e", 00:16:08.282 "is_configured": true, 00:16:08.282 "data_offset": 0, 00:16:08.283 "data_size": 65536 00:16:08.283 }, 00:16:08.283 { 00:16:08.283 "name": null, 00:16:08.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.283 "is_configured": false, 00:16:08.283 "data_offset": 0, 00:16:08.283 "data_size": 65536 00:16:08.283 }, 00:16:08.283 { 00:16:08.283 "name": "BaseBdev3", 00:16:08.283 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:08.283 "is_configured": true, 00:16:08.283 "data_offset": 0, 00:16:08.283 "data_size": 65536 00:16:08.283 }, 00:16:08.283 { 00:16:08.283 "name": "BaseBdev4", 00:16:08.283 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:08.283 "is_configured": true, 00:16:08.283 "data_offset": 0, 00:16:08.283 "data_size": 65536 00:16:08.283 } 00:16:08.283 ] 00:16:08.283 }' 00:16:08.283 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.283 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.283 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:16:08.283 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.283 10:44:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.218 "name": "raid_bdev1", 00:16:09.218 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:09.218 "strip_size_kb": 0, 00:16:09.218 "state": "online", 00:16:09.218 "raid_level": "raid1", 00:16:09.218 "superblock": false, 00:16:09.218 "num_base_bdevs": 4, 00:16:09.218 "num_base_bdevs_discovered": 3, 00:16:09.218 "num_base_bdevs_operational": 3, 00:16:09.218 "process": { 00:16:09.218 "type": "rebuild", 00:16:09.218 "target": "spare", 00:16:09.218 "progress": { 00:16:09.218 
"blocks": 51200, 00:16:09.218 "percent": 78 00:16:09.218 } 00:16:09.218 }, 00:16:09.218 "base_bdevs_list": [ 00:16:09.218 { 00:16:09.218 "name": "spare", 00:16:09.218 "uuid": "689e6f00-9fc3-533b-a6e1-2901cb5cf73e", 00:16:09.218 "is_configured": true, 00:16:09.218 "data_offset": 0, 00:16:09.218 "data_size": 65536 00:16:09.218 }, 00:16:09.218 { 00:16:09.218 "name": null, 00:16:09.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.218 "is_configured": false, 00:16:09.218 "data_offset": 0, 00:16:09.218 "data_size": 65536 00:16:09.218 }, 00:16:09.218 { 00:16:09.218 "name": "BaseBdev3", 00:16:09.218 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:09.218 "is_configured": true, 00:16:09.218 "data_offset": 0, 00:16:09.218 "data_size": 65536 00:16:09.218 }, 00:16:09.218 { 00:16:09.218 "name": "BaseBdev4", 00:16:09.218 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:09.218 "is_configured": true, 00:16:09.218 "data_offset": 0, 00:16:09.218 "data_size": 65536 00:16:09.218 } 00:16:09.218 ] 00:16:09.218 }' 00:16:09.218 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.477 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.477 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.477 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.477 10:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.044 [2024-11-15 10:44:40.370996] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:10.044 [2024-11-15 10:44:40.371110] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:10.044 [2024-11-15 10:44:40.371189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.303 10:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.561 10:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.561 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.561 "name": "raid_bdev1", 00:16:10.561 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:10.561 "strip_size_kb": 0, 00:16:10.561 "state": "online", 00:16:10.561 "raid_level": "raid1", 00:16:10.561 "superblock": false, 00:16:10.561 "num_base_bdevs": 4, 00:16:10.561 "num_base_bdevs_discovered": 3, 00:16:10.561 "num_base_bdevs_operational": 3, 00:16:10.561 "base_bdevs_list": [ 00:16:10.561 { 00:16:10.561 "name": "spare", 00:16:10.561 "uuid": "689e6f00-9fc3-533b-a6e1-2901cb5cf73e", 00:16:10.561 "is_configured": true, 00:16:10.561 "data_offset": 0, 00:16:10.561 "data_size": 65536 00:16:10.561 }, 00:16:10.561 { 00:16:10.561 "name": null, 00:16:10.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.562 "is_configured": false, 00:16:10.562 
"data_offset": 0, 00:16:10.562 "data_size": 65536 00:16:10.562 }, 00:16:10.562 { 00:16:10.562 "name": "BaseBdev3", 00:16:10.562 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:10.562 "is_configured": true, 00:16:10.562 "data_offset": 0, 00:16:10.562 "data_size": 65536 00:16:10.562 }, 00:16:10.562 { 00:16:10.562 "name": "BaseBdev4", 00:16:10.562 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:10.562 "is_configured": true, 00:16:10.562 "data_offset": 0, 00:16:10.562 "data_size": 65536 00:16:10.562 } 00:16:10.562 ] 00:16:10.562 }' 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.562 10:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.562 10:44:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.562 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.562 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.562 "name": "raid_bdev1", 00:16:10.562 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:10.562 "strip_size_kb": 0, 00:16:10.562 "state": "online", 00:16:10.562 "raid_level": "raid1", 00:16:10.562 "superblock": false, 00:16:10.562 "num_base_bdevs": 4, 00:16:10.562 "num_base_bdevs_discovered": 3, 00:16:10.562 "num_base_bdevs_operational": 3, 00:16:10.562 "base_bdevs_list": [ 00:16:10.562 { 00:16:10.562 "name": "spare", 00:16:10.562 "uuid": "689e6f00-9fc3-533b-a6e1-2901cb5cf73e", 00:16:10.562 "is_configured": true, 00:16:10.562 "data_offset": 0, 00:16:10.562 "data_size": 65536 00:16:10.562 }, 00:16:10.562 { 00:16:10.562 "name": null, 00:16:10.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.562 "is_configured": false, 00:16:10.562 "data_offset": 0, 00:16:10.562 "data_size": 65536 00:16:10.562 }, 00:16:10.562 { 00:16:10.562 "name": "BaseBdev3", 00:16:10.562 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:10.562 "is_configured": true, 00:16:10.562 "data_offset": 0, 00:16:10.562 "data_size": 65536 00:16:10.562 }, 00:16:10.562 { 00:16:10.562 "name": "BaseBdev4", 00:16:10.562 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:10.562 "is_configured": true, 00:16:10.562 "data_offset": 0, 00:16:10.562 "data_size": 65536 00:16:10.562 } 00:16:10.562 ] 00:16:10.562 }' 00:16:10.562 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.562 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.562 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.850 "name": "raid_bdev1", 00:16:10.850 "uuid": "c031acc9-cc84-4739-9996-798bf2e48c7b", 00:16:10.850 "strip_size_kb": 0, 00:16:10.850 "state": "online", 00:16:10.850 "raid_level": "raid1", 00:16:10.850 "superblock": false, 00:16:10.850 "num_base_bdevs": 4, 00:16:10.850 
"num_base_bdevs_discovered": 3, 00:16:10.850 "num_base_bdevs_operational": 3, 00:16:10.850 "base_bdevs_list": [ 00:16:10.850 { 00:16:10.850 "name": "spare", 00:16:10.850 "uuid": "689e6f00-9fc3-533b-a6e1-2901cb5cf73e", 00:16:10.850 "is_configured": true, 00:16:10.850 "data_offset": 0, 00:16:10.850 "data_size": 65536 00:16:10.850 }, 00:16:10.850 { 00:16:10.850 "name": null, 00:16:10.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.850 "is_configured": false, 00:16:10.850 "data_offset": 0, 00:16:10.850 "data_size": 65536 00:16:10.850 }, 00:16:10.850 { 00:16:10.850 "name": "BaseBdev3", 00:16:10.850 "uuid": "7b3073c0-baa3-53e2-9ade-c380ea53110a", 00:16:10.850 "is_configured": true, 00:16:10.850 "data_offset": 0, 00:16:10.850 "data_size": 65536 00:16:10.850 }, 00:16:10.850 { 00:16:10.850 "name": "BaseBdev4", 00:16:10.850 "uuid": "eba3df51-754b-5957-95a5-0ae22fd7eaa2", 00:16:10.850 "is_configured": true, 00:16:10.850 "data_offset": 0, 00:16:10.850 "data_size": 65536 00:16:10.850 } 00:16:10.850 ] 00:16:10.850 }' 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.850 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.128 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:11.128 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.128 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.128 [2024-11-15 10:44:41.681816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.128 [2024-11-15 10:44:41.681995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.128 [2024-11-15 10:44:41.682208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.128 [2024-11-15 10:44:41.682456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:16:11.128 [2024-11-15 10:44:41.682612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:11.387 10:44:41 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.387 10:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:11.646 /dev/nbd0 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.646 1+0 records in 00:16:11.646 1+0 records out 00:16:11.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561157 s, 7.3 MB/s 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.646 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:11.906 /dev/nbd1 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.906 1+0 records in 00:16:11.906 1+0 records out 00:16:11.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391118 s, 10.5 MB/s 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.906 10:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:12.165 10:44:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:12.165 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.165 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:12.165 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.165 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:12.165 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.165 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:12.423 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:12.423 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:12.423 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.423 10:44:42 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.423 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.423 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.423 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:12.423 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.423 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.423 10:44:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77927 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77927 ']' 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77927 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # 
uname 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77927 00:16:12.682 killing process with pid 77927 00:16:12.682 Received shutdown signal, test time was about 60.000000 seconds 00:16:12.682 00:16:12.682 Latency(us) 00:16:12.682 [2024-11-15T10:44:43.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.682 [2024-11-15T10:44:43.242Z] =================================================================================================================== 00:16:12.682 [2024-11-15T10:44:43.242Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77927' 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77927 00:16:12.682 10:44:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77927 00:16:12.682 [2024-11-15 10:44:43.199099] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.251 [2024-11-15 10:44:43.623848] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.187 ************************************ 00:16:14.187 END TEST raid_rebuild_test 00:16:14.187 ************************************ 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:14.187 00:16:14.187 real 0m21.140s 00:16:14.187 user 0m23.791s 00:16:14.187 sys 0m3.360s 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:14.187 10:44:44 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:16:14.187 10:44:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:14.187 10:44:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.187 10:44:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.187 ************************************ 00:16:14.187 START TEST raid_rebuild_test_sb 00:16:14.187 ************************************ 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.187 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78416 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78416 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78416 ']' 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:14.188 10:44:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.446 [2024-11-15 10:44:44.807896] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:16:14.446 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:14.446 Zero copy mechanism will not be used. 
00:16:14.446 [2024-11-15 10:44:44.808292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78416 ] 00:16:14.446 [2024-11-15 10:44:44.994306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.705 [2024-11-15 10:44:45.122819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.964 [2024-11-15 10:44:45.326547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.964 [2024-11-15 10:44:45.326613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.531 BaseBdev1_malloc 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.531 [2024-11-15 10:44:45.881326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:15.531 [2024-11-15 10:44:45.881429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.531 [2024-11-15 10:44:45.881463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:15.531 [2024-11-15 10:44:45.881483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.531 [2024-11-15 10:44:45.884109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.531 [2024-11-15 10:44:45.884312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.531 BaseBdev1 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.531 BaseBdev2_malloc 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.531 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.531 [2024-11-15 10:44:45.931340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:15.531 [2024-11-15 10:44:45.931456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.531 [2024-11-15 10:44:45.931490] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:15.531 [2024-11-15 10:44:45.931509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.531 [2024-11-15 10:44:45.934322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.532 [2024-11-15 10:44:45.934384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:15.532 BaseBdev2 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.532 BaseBdev3_malloc 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.532 [2024-11-15 10:44:45.991905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:15.532 [2024-11-15 10:44:45.992181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.532 [2024-11-15 10:44:45.992224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:15.532 [2024-11-15 10:44:45.992244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:15.532 [2024-11-15 10:44:45.994892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.532 [2024-11-15 10:44:45.994998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:15.532 BaseBdev3 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.532 10:44:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.532 BaseBdev4_malloc 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.532 [2024-11-15 10:44:46.041645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:15.532 [2024-11-15 10:44:46.041771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.532 [2024-11-15 10:44:46.041824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:15.532 [2024-11-15 10:44:46.041858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.532 [2024-11-15 10:44:46.044785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.532 [2024-11-15 10:44:46.044839] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:15.532 BaseBdev4 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.532 spare_malloc 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.532 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.791 spare_delay 00:16:15.791 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.791 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.791 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.791 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.791 [2024-11-15 10:44:46.097653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.791 [2024-11-15 10:44:46.097750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.791 [2024-11-15 10:44:46.097778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:15.791 [2024-11-15 10:44:46.097796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:15.791 [2024-11-15 10:44:46.100489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.791 [2024-11-15 10:44:46.100538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.791 spare 00:16:15.791 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.791 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:15.791 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.791 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.791 [2024-11-15 10:44:46.105720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.791 [2024-11-15 10:44:46.108128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.792 [2024-11-15 10:44:46.108415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.792 [2024-11-15 10:44:46.108512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:15.792 [2024-11-15 10:44:46.108764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:15.792 [2024-11-15 10:44:46.108787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:15.792 [2024-11-15 10:44:46.109101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:15.792 [2024-11-15 10:44:46.109320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:15.792 [2024-11-15 10:44:46.109337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:15.792 [2024-11-15 10:44:46.109541] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.792 "name": "raid_bdev1", 00:16:15.792 "uuid": 
"42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:15.792 "strip_size_kb": 0, 00:16:15.792 "state": "online", 00:16:15.792 "raid_level": "raid1", 00:16:15.792 "superblock": true, 00:16:15.792 "num_base_bdevs": 4, 00:16:15.792 "num_base_bdevs_discovered": 4, 00:16:15.792 "num_base_bdevs_operational": 4, 00:16:15.792 "base_bdevs_list": [ 00:16:15.792 { 00:16:15.792 "name": "BaseBdev1", 00:16:15.792 "uuid": "6fc47674-b81a-5b43-ab65-6c3c503b574e", 00:16:15.792 "is_configured": true, 00:16:15.792 "data_offset": 2048, 00:16:15.792 "data_size": 63488 00:16:15.792 }, 00:16:15.792 { 00:16:15.792 "name": "BaseBdev2", 00:16:15.792 "uuid": "6b3bf578-bff0-54be-951b-4e39478725c4", 00:16:15.792 "is_configured": true, 00:16:15.792 "data_offset": 2048, 00:16:15.792 "data_size": 63488 00:16:15.792 }, 00:16:15.792 { 00:16:15.792 "name": "BaseBdev3", 00:16:15.792 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:15.792 "is_configured": true, 00:16:15.792 "data_offset": 2048, 00:16:15.792 "data_size": 63488 00:16:15.792 }, 00:16:15.792 { 00:16:15.792 "name": "BaseBdev4", 00:16:15.792 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:15.792 "is_configured": true, 00:16:15.792 "data_offset": 2048, 00:16:15.792 "data_size": 63488 00:16:15.792 } 00:16:15.792 ] 00:16:15.792 }' 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.792 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.359 [2024-11-15 10:44:46.630232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:16.359 10:44:46 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.359 10:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:16.618 [2024-11-15 10:44:47.030000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:16.618 /dev/nbd0 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.618 1+0 records in 00:16:16.618 1+0 records out 00:16:16.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263653 s, 15.5 MB/s 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:16.618 10:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:26.650 63488+0 records in 00:16:26.651 63488+0 records out 00:16:26.651 32505856 bytes (33 MB, 31 MiB) copied, 8.36233 s, 3.9 MB/s 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:26.651 [2024-11-15 10:44:55.780649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.651 [2024-11-15 10:44:55.792756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.651 "name": "raid_bdev1", 00:16:26.651 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:26.651 "strip_size_kb": 0, 00:16:26.651 "state": "online", 00:16:26.651 "raid_level": "raid1", 00:16:26.651 "superblock": true, 00:16:26.651 "num_base_bdevs": 4, 00:16:26.651 "num_base_bdevs_discovered": 3, 00:16:26.651 "num_base_bdevs_operational": 3, 00:16:26.651 "base_bdevs_list": [ 00:16:26.651 { 00:16:26.651 "name": null, 00:16:26.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.651 "is_configured": false, 00:16:26.651 "data_offset": 0, 00:16:26.651 "data_size": 63488 00:16:26.651 }, 00:16:26.651 { 00:16:26.651 "name": "BaseBdev2", 00:16:26.651 "uuid": "6b3bf578-bff0-54be-951b-4e39478725c4", 00:16:26.651 "is_configured": true, 00:16:26.651 
"data_offset": 2048, 00:16:26.651 "data_size": 63488 00:16:26.651 }, 00:16:26.651 { 00:16:26.651 "name": "BaseBdev3", 00:16:26.651 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:26.651 "is_configured": true, 00:16:26.651 "data_offset": 2048, 00:16:26.651 "data_size": 63488 00:16:26.651 }, 00:16:26.651 { 00:16:26.651 "name": "BaseBdev4", 00:16:26.651 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:26.651 "is_configured": true, 00:16:26.651 "data_offset": 2048, 00:16:26.651 "data_size": 63488 00:16:26.651 } 00:16:26.651 ] 00:16:26.651 }' 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.651 10:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.651 10:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:26.651 10:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.651 10:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.651 [2024-11-15 10:44:56.308922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.651 [2024-11-15 10:44:56.323365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:16:26.651 10:44:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.651 10:44:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:26.651 [2024-11-15 10:44:56.325685] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.916 "name": "raid_bdev1", 00:16:26.916 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:26.916 "strip_size_kb": 0, 00:16:26.916 "state": "online", 00:16:26.916 "raid_level": "raid1", 00:16:26.916 "superblock": true, 00:16:26.916 "num_base_bdevs": 4, 00:16:26.916 "num_base_bdevs_discovered": 4, 00:16:26.916 "num_base_bdevs_operational": 4, 00:16:26.916 "process": { 00:16:26.916 "type": "rebuild", 00:16:26.916 "target": "spare", 00:16:26.916 "progress": { 00:16:26.916 "blocks": 20480, 00:16:26.916 "percent": 32 00:16:26.916 } 00:16:26.916 }, 00:16:26.916 "base_bdevs_list": [ 00:16:26.916 { 00:16:26.916 "name": "spare", 00:16:26.916 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:26.916 "is_configured": true, 00:16:26.916 "data_offset": 2048, 00:16:26.916 "data_size": 63488 00:16:26.916 }, 00:16:26.916 { 00:16:26.916 "name": "BaseBdev2", 00:16:26.916 "uuid": "6b3bf578-bff0-54be-951b-4e39478725c4", 00:16:26.916 "is_configured": true, 00:16:26.916 "data_offset": 2048, 00:16:26.916 "data_size": 63488 00:16:26.916 }, 00:16:26.916 { 00:16:26.916 "name": "BaseBdev3", 00:16:26.916 "uuid": 
"2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:26.916 "is_configured": true, 00:16:26.916 "data_offset": 2048, 00:16:26.916 "data_size": 63488 00:16:26.916 }, 00:16:26.916 { 00:16:26.916 "name": "BaseBdev4", 00:16:26.916 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:26.916 "is_configured": true, 00:16:26.916 "data_offset": 2048, 00:16:26.916 "data_size": 63488 00:16:26.916 } 00:16:26.916 ] 00:16:26.916 }' 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.916 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.174 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.174 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:27.174 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.174 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.174 [2024-11-15 10:44:57.495471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.174 [2024-11-15 10:44:57.532420] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:27.174 [2024-11-15 10:44:57.532700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.174 [2024-11-15 10:44:57.532941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.175 [2024-11-15 10:44:57.533001] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.175 "name": "raid_bdev1", 00:16:27.175 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:27.175 "strip_size_kb": 0, 00:16:27.175 "state": "online", 00:16:27.175 "raid_level": "raid1", 00:16:27.175 "superblock": true, 00:16:27.175 "num_base_bdevs": 4, 00:16:27.175 
"num_base_bdevs_discovered": 3, 00:16:27.175 "num_base_bdevs_operational": 3, 00:16:27.175 "base_bdevs_list": [ 00:16:27.175 { 00:16:27.175 "name": null, 00:16:27.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.175 "is_configured": false, 00:16:27.175 "data_offset": 0, 00:16:27.175 "data_size": 63488 00:16:27.175 }, 00:16:27.175 { 00:16:27.175 "name": "BaseBdev2", 00:16:27.175 "uuid": "6b3bf578-bff0-54be-951b-4e39478725c4", 00:16:27.175 "is_configured": true, 00:16:27.175 "data_offset": 2048, 00:16:27.175 "data_size": 63488 00:16:27.175 }, 00:16:27.175 { 00:16:27.175 "name": "BaseBdev3", 00:16:27.175 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:27.175 "is_configured": true, 00:16:27.175 "data_offset": 2048, 00:16:27.175 "data_size": 63488 00:16:27.175 }, 00:16:27.175 { 00:16:27.175 "name": "BaseBdev4", 00:16:27.175 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:27.175 "is_configured": true, 00:16:27.175 "data_offset": 2048, 00:16:27.175 "data_size": 63488 00:16:27.175 } 00:16:27.175 ] 00:16:27.175 }' 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.175 10:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.742 "name": "raid_bdev1", 00:16:27.742 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:27.742 "strip_size_kb": 0, 00:16:27.742 "state": "online", 00:16:27.742 "raid_level": "raid1", 00:16:27.742 "superblock": true, 00:16:27.742 "num_base_bdevs": 4, 00:16:27.742 "num_base_bdevs_discovered": 3, 00:16:27.742 "num_base_bdevs_operational": 3, 00:16:27.742 "base_bdevs_list": [ 00:16:27.742 { 00:16:27.742 "name": null, 00:16:27.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.742 "is_configured": false, 00:16:27.742 "data_offset": 0, 00:16:27.742 "data_size": 63488 00:16:27.742 }, 00:16:27.742 { 00:16:27.742 "name": "BaseBdev2", 00:16:27.742 "uuid": "6b3bf578-bff0-54be-951b-4e39478725c4", 00:16:27.742 "is_configured": true, 00:16:27.742 "data_offset": 2048, 00:16:27.742 "data_size": 63488 00:16:27.742 }, 00:16:27.742 { 00:16:27.742 "name": "BaseBdev3", 00:16:27.742 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:27.742 "is_configured": true, 00:16:27.742 "data_offset": 2048, 00:16:27.742 "data_size": 63488 00:16:27.742 }, 00:16:27.742 { 00:16:27.742 "name": "BaseBdev4", 00:16:27.742 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:27.742 "is_configured": true, 00:16:27.742 "data_offset": 2048, 00:16:27.742 "data_size": 63488 00:16:27.742 } 00:16:27.742 ] 00:16:27.742 }' 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.742 10:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.742 [2024-11-15 10:44:58.212505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.743 [2024-11-15 10:44:58.225761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:16:27.743 10:44:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.743 10:44:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:27.743 [2024-11-15 10:44:58.228086] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.677 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.677 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.677 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.677 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.677 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.936 "name": "raid_bdev1", 00:16:28.936 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:28.936 "strip_size_kb": 0, 00:16:28.936 "state": "online", 00:16:28.936 "raid_level": "raid1", 00:16:28.936 "superblock": true, 00:16:28.936 "num_base_bdevs": 4, 00:16:28.936 "num_base_bdevs_discovered": 4, 00:16:28.936 "num_base_bdevs_operational": 4, 00:16:28.936 "process": { 00:16:28.936 "type": "rebuild", 00:16:28.936 "target": "spare", 00:16:28.936 "progress": { 00:16:28.936 "blocks": 20480, 00:16:28.936 "percent": 32 00:16:28.936 } 00:16:28.936 }, 00:16:28.936 "base_bdevs_list": [ 00:16:28.936 { 00:16:28.936 "name": "spare", 00:16:28.936 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:28.936 "is_configured": true, 00:16:28.936 "data_offset": 2048, 00:16:28.936 "data_size": 63488 00:16:28.936 }, 00:16:28.936 { 00:16:28.936 "name": "BaseBdev2", 00:16:28.936 "uuid": "6b3bf578-bff0-54be-951b-4e39478725c4", 00:16:28.936 "is_configured": true, 00:16:28.936 "data_offset": 2048, 00:16:28.936 "data_size": 63488 00:16:28.936 }, 00:16:28.936 { 00:16:28.936 "name": "BaseBdev3", 00:16:28.936 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:28.936 "is_configured": true, 00:16:28.936 "data_offset": 2048, 00:16:28.936 "data_size": 63488 00:16:28.936 }, 00:16:28.936 { 00:16:28.936 "name": "BaseBdev4", 00:16:28.936 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:28.936 "is_configured": true, 00:16:28.936 "data_offset": 2048, 00:16:28.936 "data_size": 63488 00:16:28.936 } 00:16:28.936 ] 00:16:28.936 }' 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:28.936 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.936 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.936 [2024-11-15 10:44:59.393775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.195 [2024-11-15 10:44:59.534755] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.195 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.195 "name": "raid_bdev1", 00:16:29.195 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:29.195 "strip_size_kb": 0, 00:16:29.195 "state": "online", 00:16:29.195 "raid_level": "raid1", 00:16:29.195 "superblock": true, 00:16:29.195 "num_base_bdevs": 4, 00:16:29.195 "num_base_bdevs_discovered": 3, 00:16:29.195 "num_base_bdevs_operational": 3, 00:16:29.195 "process": { 00:16:29.195 "type": "rebuild", 00:16:29.195 "target": "spare", 00:16:29.195 "progress": { 00:16:29.195 "blocks": 24576, 00:16:29.195 "percent": 38 00:16:29.195 } 00:16:29.195 }, 00:16:29.195 "base_bdevs_list": [ 00:16:29.195 { 00:16:29.195 "name": "spare", 00:16:29.195 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:29.195 "is_configured": true, 00:16:29.195 "data_offset": 2048, 00:16:29.195 "data_size": 63488 00:16:29.195 }, 00:16:29.195 { 00:16:29.195 "name": null, 00:16:29.195 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:29.195 "is_configured": false, 00:16:29.195 "data_offset": 0, 00:16:29.195 "data_size": 63488 00:16:29.195 }, 00:16:29.195 { 00:16:29.195 "name": "BaseBdev3", 00:16:29.195 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:29.195 "is_configured": true, 00:16:29.195 "data_offset": 2048, 00:16:29.195 "data_size": 63488 00:16:29.195 }, 00:16:29.195 { 00:16:29.195 "name": "BaseBdev4", 00:16:29.195 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:29.195 "is_configured": true, 00:16:29.195 "data_offset": 2048, 00:16:29.195 "data_size": 63488 00:16:29.195 } 00:16:29.195 ] 00:16:29.196 }' 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=493 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.196 
10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.196 10:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.454 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.454 "name": "raid_bdev1", 00:16:29.454 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:29.454 "strip_size_kb": 0, 00:16:29.454 "state": "online", 00:16:29.454 "raid_level": "raid1", 00:16:29.454 "superblock": true, 00:16:29.454 "num_base_bdevs": 4, 00:16:29.454 "num_base_bdevs_discovered": 3, 00:16:29.454 "num_base_bdevs_operational": 3, 00:16:29.454 "process": { 00:16:29.454 "type": "rebuild", 00:16:29.454 "target": "spare", 00:16:29.454 "progress": { 00:16:29.454 "blocks": 26624, 00:16:29.454 "percent": 41 00:16:29.454 } 00:16:29.454 }, 00:16:29.454 "base_bdevs_list": [ 00:16:29.454 { 00:16:29.454 "name": "spare", 00:16:29.454 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:29.454 "is_configured": true, 00:16:29.454 "data_offset": 2048, 00:16:29.454 "data_size": 63488 00:16:29.454 }, 00:16:29.454 { 00:16:29.454 "name": null, 00:16:29.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.454 "is_configured": false, 00:16:29.454 "data_offset": 0, 00:16:29.454 "data_size": 63488 00:16:29.454 }, 00:16:29.454 { 00:16:29.454 "name": "BaseBdev3", 00:16:29.454 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:29.454 "is_configured": true, 00:16:29.454 "data_offset": 2048, 00:16:29.454 "data_size": 63488 00:16:29.454 }, 00:16:29.454 { 00:16:29.454 "name": "BaseBdev4", 00:16:29.454 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:29.454 "is_configured": true, 00:16:29.454 "data_offset": 2048, 00:16:29.454 "data_size": 63488 
00:16:29.454 } 00:16:29.454 ] 00:16:29.454 }' 00:16:29.454 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.454 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.454 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.454 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.454 10:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.391 "name": "raid_bdev1", 00:16:30.391 "uuid": 
"42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:30.391 "strip_size_kb": 0, 00:16:30.391 "state": "online", 00:16:30.391 "raid_level": "raid1", 00:16:30.391 "superblock": true, 00:16:30.391 "num_base_bdevs": 4, 00:16:30.391 "num_base_bdevs_discovered": 3, 00:16:30.391 "num_base_bdevs_operational": 3, 00:16:30.391 "process": { 00:16:30.391 "type": "rebuild", 00:16:30.391 "target": "spare", 00:16:30.391 "progress": { 00:16:30.391 "blocks": 51200, 00:16:30.391 "percent": 80 00:16:30.391 } 00:16:30.391 }, 00:16:30.391 "base_bdevs_list": [ 00:16:30.391 { 00:16:30.391 "name": "spare", 00:16:30.391 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:30.391 "is_configured": true, 00:16:30.391 "data_offset": 2048, 00:16:30.391 "data_size": 63488 00:16:30.391 }, 00:16:30.391 { 00:16:30.391 "name": null, 00:16:30.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.391 "is_configured": false, 00:16:30.391 "data_offset": 0, 00:16:30.391 "data_size": 63488 00:16:30.391 }, 00:16:30.391 { 00:16:30.391 "name": "BaseBdev3", 00:16:30.391 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:30.391 "is_configured": true, 00:16:30.391 "data_offset": 2048, 00:16:30.391 "data_size": 63488 00:16:30.391 }, 00:16:30.391 { 00:16:30.391 "name": "BaseBdev4", 00:16:30.391 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:30.391 "is_configured": true, 00:16:30.391 "data_offset": 2048, 00:16:30.391 "data_size": 63488 00:16:30.391 } 00:16:30.391 ] 00:16:30.391 }' 00:16:30.391 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.649 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.650 10:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.650 10:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.650 10:45:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.907 [2024-11-15 10:45:01.445280] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:30.907 [2024-11-15 10:45:01.445394] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:30.907 [2024-11-15 10:45:01.445555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.841 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.841 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.841 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.841 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.841 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.841 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.841 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.841 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.842 "name": "raid_bdev1", 00:16:31.842 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:31.842 "strip_size_kb": 0, 00:16:31.842 "state": "online", 00:16:31.842 "raid_level": "raid1", 00:16:31.842 "superblock": true, 00:16:31.842 "num_base_bdevs": 
4, 00:16:31.842 "num_base_bdevs_discovered": 3, 00:16:31.842 "num_base_bdevs_operational": 3, 00:16:31.842 "base_bdevs_list": [ 00:16:31.842 { 00:16:31.842 "name": "spare", 00:16:31.842 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:31.842 "is_configured": true, 00:16:31.842 "data_offset": 2048, 00:16:31.842 "data_size": 63488 00:16:31.842 }, 00:16:31.842 { 00:16:31.842 "name": null, 00:16:31.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.842 "is_configured": false, 00:16:31.842 "data_offset": 0, 00:16:31.842 "data_size": 63488 00:16:31.842 }, 00:16:31.842 { 00:16:31.842 "name": "BaseBdev3", 00:16:31.842 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:31.842 "is_configured": true, 00:16:31.842 "data_offset": 2048, 00:16:31.842 "data_size": 63488 00:16:31.842 }, 00:16:31.842 { 00:16:31.842 "name": "BaseBdev4", 00:16:31.842 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:31.842 "is_configured": true, 00:16:31.842 "data_offset": 2048, 00:16:31.842 "data_size": 63488 00:16:31.842 } 00:16:31.842 ] 00:16:31.842 }' 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.842 10:45:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.842 "name": "raid_bdev1", 00:16:31.842 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:31.842 "strip_size_kb": 0, 00:16:31.842 "state": "online", 00:16:31.842 "raid_level": "raid1", 00:16:31.842 "superblock": true, 00:16:31.842 "num_base_bdevs": 4, 00:16:31.842 "num_base_bdevs_discovered": 3, 00:16:31.842 "num_base_bdevs_operational": 3, 00:16:31.842 "base_bdevs_list": [ 00:16:31.842 { 00:16:31.842 "name": "spare", 00:16:31.842 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:31.842 "is_configured": true, 00:16:31.842 "data_offset": 2048, 00:16:31.842 "data_size": 63488 00:16:31.842 }, 00:16:31.842 { 00:16:31.842 "name": null, 00:16:31.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.842 "is_configured": false, 00:16:31.842 "data_offset": 0, 00:16:31.842 "data_size": 63488 00:16:31.842 }, 00:16:31.842 { 00:16:31.842 "name": "BaseBdev3", 00:16:31.842 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:31.842 "is_configured": true, 00:16:31.842 "data_offset": 2048, 00:16:31.842 "data_size": 63488 00:16:31.842 }, 00:16:31.842 { 00:16:31.842 "name": "BaseBdev4", 00:16:31.842 "uuid": 
"47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:31.842 "is_configured": true, 00:16:31.842 "data_offset": 2048, 00:16:31.842 "data_size": 63488 00:16:31.842 } 00:16:31.842 ] 00:16:31.842 }' 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.842 10:45:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.842 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.100 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.100 "name": "raid_bdev1", 00:16:32.100 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:32.100 "strip_size_kb": 0, 00:16:32.100 "state": "online", 00:16:32.100 "raid_level": "raid1", 00:16:32.100 "superblock": true, 00:16:32.100 "num_base_bdevs": 4, 00:16:32.100 "num_base_bdevs_discovered": 3, 00:16:32.100 "num_base_bdevs_operational": 3, 00:16:32.100 "base_bdevs_list": [ 00:16:32.100 { 00:16:32.100 "name": "spare", 00:16:32.100 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:32.100 "is_configured": true, 00:16:32.100 "data_offset": 2048, 00:16:32.100 "data_size": 63488 00:16:32.100 }, 00:16:32.100 { 00:16:32.100 "name": null, 00:16:32.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.100 "is_configured": false, 00:16:32.100 "data_offset": 0, 00:16:32.100 "data_size": 63488 00:16:32.100 }, 00:16:32.100 { 00:16:32.100 "name": "BaseBdev3", 00:16:32.100 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:32.100 "is_configured": true, 00:16:32.100 "data_offset": 2048, 00:16:32.100 "data_size": 63488 00:16:32.100 }, 00:16:32.100 { 00:16:32.100 "name": "BaseBdev4", 00:16:32.101 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:32.101 "is_configured": true, 00:16:32.101 "data_offset": 2048, 00:16:32.101 "data_size": 63488 00:16:32.101 } 00:16:32.101 ] 00:16:32.101 }' 00:16:32.101 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.101 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.359 [2024-11-15 10:45:02.832384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.359 [2024-11-15 10:45:02.832425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.359 [2024-11-15 10:45:02.832525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.359 [2024-11-15 10:45:02.832631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.359 [2024-11-15 10:45:02.832649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:32.359 10:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:32.926 /dev/nbd0 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:32.926 
10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.926 1+0 records in 00:16:32.926 1+0 records out 00:16:32.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572399 s, 7.2 MB/s 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:32.926 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:33.185 /dev/nbd1 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.185 1+0 records in 00:16:33.185 1+0 records out 00:16:33.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417691 s, 9.8 MB/s 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.185 10:45:03 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.185 10:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.763 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:34.022 10:45:04 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.022 [2024-11-15 10:45:04.362039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.022 [2024-11-15 10:45:04.362112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.022 [2024-11-15 10:45:04.362150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:34.022 [2024-11-15 10:45:04.362165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.022 [2024-11-15 10:45:04.364866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.022 [2024-11-15 10:45:04.364912] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.022 [2024-11-15 10:45:04.365025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.022 [2024-11-15 10:45:04.365087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.022 [2024-11-15 10:45:04.365268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.022 [2024-11-15 10:45:04.365436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:34.022 spare 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.022 [2024-11-15 10:45:04.465583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:34.022 [2024-11-15 10:45:04.465640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:34.022 [2024-11-15 10:45:04.466059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:34.022 [2024-11-15 10:45:04.466315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:34.022 [2024-11-15 10:45:04.466336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:34.022 [2024-11-15 10:45:04.466592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.022 "name": "raid_bdev1", 00:16:34.022 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:34.022 "strip_size_kb": 0, 00:16:34.022 "state": "online", 00:16:34.022 "raid_level": "raid1", 00:16:34.022 "superblock": true, 00:16:34.022 "num_base_bdevs": 4, 00:16:34.022 "num_base_bdevs_discovered": 3, 00:16:34.022 "num_base_bdevs_operational": 
3, 00:16:34.022 "base_bdevs_list": [ 00:16:34.022 { 00:16:34.022 "name": "spare", 00:16:34.022 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:34.022 "is_configured": true, 00:16:34.022 "data_offset": 2048, 00:16:34.022 "data_size": 63488 00:16:34.022 }, 00:16:34.022 { 00:16:34.022 "name": null, 00:16:34.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.022 "is_configured": false, 00:16:34.022 "data_offset": 2048, 00:16:34.022 "data_size": 63488 00:16:34.022 }, 00:16:34.022 { 00:16:34.022 "name": "BaseBdev3", 00:16:34.022 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:34.022 "is_configured": true, 00:16:34.022 "data_offset": 2048, 00:16:34.022 "data_size": 63488 00:16:34.022 }, 00:16:34.022 { 00:16:34.022 "name": "BaseBdev4", 00:16:34.022 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:34.022 "is_configured": true, 00:16:34.022 "data_offset": 2048, 00:16:34.022 "data_size": 63488 00:16:34.022 } 00:16:34.022 ] 00:16:34.022 }' 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.022 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.592 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.592 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.592 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.592 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.592 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.592 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.592 10:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.592 10:45:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.592 10:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.593 "name": "raid_bdev1", 00:16:34.593 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:34.593 "strip_size_kb": 0, 00:16:34.593 "state": "online", 00:16:34.593 "raid_level": "raid1", 00:16:34.593 "superblock": true, 00:16:34.593 "num_base_bdevs": 4, 00:16:34.593 "num_base_bdevs_discovered": 3, 00:16:34.593 "num_base_bdevs_operational": 3, 00:16:34.593 "base_bdevs_list": [ 00:16:34.593 { 00:16:34.593 "name": "spare", 00:16:34.593 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:34.593 "is_configured": true, 00:16:34.593 "data_offset": 2048, 00:16:34.593 "data_size": 63488 00:16:34.593 }, 00:16:34.593 { 00:16:34.593 "name": null, 00:16:34.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.593 "is_configured": false, 00:16:34.593 "data_offset": 2048, 00:16:34.593 "data_size": 63488 00:16:34.593 }, 00:16:34.593 { 00:16:34.593 "name": "BaseBdev3", 00:16:34.593 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:34.593 "is_configured": true, 00:16:34.593 "data_offset": 2048, 00:16:34.593 "data_size": 63488 00:16:34.593 }, 00:16:34.593 { 00:16:34.593 "name": "BaseBdev4", 00:16:34.593 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:34.593 "is_configured": true, 00:16:34.593 "data_offset": 2048, 00:16:34.593 "data_size": 63488 00:16:34.593 } 00:16:34.593 ] 00:16:34.593 }' 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.593 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.852 [2024-11-15 10:45:05.218807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.852 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.852 "name": "raid_bdev1", 00:16:34.852 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:34.852 "strip_size_kb": 0, 00:16:34.852 "state": "online", 00:16:34.852 "raid_level": "raid1", 00:16:34.852 "superblock": true, 00:16:34.852 "num_base_bdevs": 4, 00:16:34.852 "num_base_bdevs_discovered": 2, 00:16:34.852 "num_base_bdevs_operational": 2, 00:16:34.852 "base_bdevs_list": [ 00:16:34.852 { 00:16:34.852 "name": null, 00:16:34.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.852 "is_configured": false, 00:16:34.852 "data_offset": 0, 00:16:34.852 "data_size": 63488 00:16:34.852 }, 00:16:34.852 { 00:16:34.852 "name": null, 00:16:34.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.852 "is_configured": false, 00:16:34.852 "data_offset": 2048, 00:16:34.852 "data_size": 63488 00:16:34.852 }, 00:16:34.852 { 00:16:34.852 "name": "BaseBdev3", 00:16:34.852 
"uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:34.852 "is_configured": true, 00:16:34.852 "data_offset": 2048, 00:16:34.852 "data_size": 63488 00:16:34.852 }, 00:16:34.852 { 00:16:34.852 "name": "BaseBdev4", 00:16:34.852 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:34.852 "is_configured": true, 00:16:34.852 "data_offset": 2048, 00:16:34.852 "data_size": 63488 00:16:34.852 } 00:16:34.852 ] 00:16:34.853 }' 00:16:34.853 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.853 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.421 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:35.421 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.421 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.421 [2024-11-15 10:45:05.747010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.421 [2024-11-15 10:45:05.747415] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:35.421 [2024-11-15 10:45:05.747449] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:35.421 [2024-11-15 10:45:05.747500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.421 [2024-11-15 10:45:05.761017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:35.421 10:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.421 10:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:35.421 [2024-11-15 10:45:05.763429] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.355 "name": "raid_bdev1", 00:16:36.355 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:36.355 "strip_size_kb": 0, 00:16:36.355 "state": "online", 00:16:36.355 "raid_level": "raid1", 
00:16:36.355 "superblock": true, 00:16:36.355 "num_base_bdevs": 4, 00:16:36.355 "num_base_bdevs_discovered": 3, 00:16:36.355 "num_base_bdevs_operational": 3, 00:16:36.355 "process": { 00:16:36.355 "type": "rebuild", 00:16:36.355 "target": "spare", 00:16:36.355 "progress": { 00:16:36.355 "blocks": 20480, 00:16:36.355 "percent": 32 00:16:36.355 } 00:16:36.355 }, 00:16:36.355 "base_bdevs_list": [ 00:16:36.355 { 00:16:36.355 "name": "spare", 00:16:36.355 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:36.355 "is_configured": true, 00:16:36.355 "data_offset": 2048, 00:16:36.355 "data_size": 63488 00:16:36.355 }, 00:16:36.355 { 00:16:36.355 "name": null, 00:16:36.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.355 "is_configured": false, 00:16:36.355 "data_offset": 2048, 00:16:36.355 "data_size": 63488 00:16:36.355 }, 00:16:36.355 { 00:16:36.355 "name": "BaseBdev3", 00:16:36.355 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:36.355 "is_configured": true, 00:16:36.355 "data_offset": 2048, 00:16:36.355 "data_size": 63488 00:16:36.355 }, 00:16:36.355 { 00:16:36.355 "name": "BaseBdev4", 00:16:36.355 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:36.355 "is_configured": true, 00:16:36.355 "data_offset": 2048, 00:16:36.355 "data_size": 63488 00:16:36.355 } 00:16:36.355 ] 00:16:36.355 }' 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.355 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.611 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.611 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:36.611 10:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:36.611 10:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.611 [2024-11-15 10:45:06.928752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.611 [2024-11-15 10:45:06.969972] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:36.611 [2024-11-15 10:45:06.970054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.611 [2024-11-15 10:45:06.970083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.612 [2024-11-15 10:45:06.970094] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.612 10:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.612 10:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.612 10:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.612 "name": "raid_bdev1", 00:16:36.612 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:36.612 "strip_size_kb": 0, 00:16:36.612 "state": "online", 00:16:36.612 "raid_level": "raid1", 00:16:36.612 "superblock": true, 00:16:36.612 "num_base_bdevs": 4, 00:16:36.612 "num_base_bdevs_discovered": 2, 00:16:36.612 "num_base_bdevs_operational": 2, 00:16:36.612 "base_bdevs_list": [ 00:16:36.612 { 00:16:36.612 "name": null, 00:16:36.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.612 "is_configured": false, 00:16:36.612 "data_offset": 0, 00:16:36.612 "data_size": 63488 00:16:36.612 }, 00:16:36.612 { 00:16:36.612 "name": null, 00:16:36.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.612 "is_configured": false, 00:16:36.612 "data_offset": 2048, 00:16:36.612 "data_size": 63488 00:16:36.612 }, 00:16:36.612 { 00:16:36.612 "name": "BaseBdev3", 00:16:36.612 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:36.612 "is_configured": true, 00:16:36.612 "data_offset": 2048, 00:16:36.612 "data_size": 63488 00:16:36.612 }, 00:16:36.612 { 00:16:36.612 "name": "BaseBdev4", 00:16:36.612 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:36.612 "is_configured": true, 00:16:36.612 "data_offset": 2048, 00:16:36.612 "data_size": 63488 00:16:36.612 } 00:16:36.612 ] 00:16:36.612 }' 00:16:36.612 10:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:36.612 10:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.176 10:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:37.176 10:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.176 10:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.176 [2024-11-15 10:45:07.509211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:37.176 [2024-11-15 10:45:07.509432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.176 [2024-11-15 10:45:07.509604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:37.176 [2024-11-15 10:45:07.509631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.176 [2024-11-15 10:45:07.510219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.176 [2024-11-15 10:45:07.510256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:37.176 [2024-11-15 10:45:07.510394] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:37.176 [2024-11-15 10:45:07.510415] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:37.176 [2024-11-15 10:45:07.510431] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:37.176 [2024-11-15 10:45:07.510467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.176 [2024-11-15 10:45:07.523158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:37.176 spare 00:16:37.176 10:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.176 10:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:37.176 [2024-11-15 10:45:07.525464] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.110 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.111 "name": "raid_bdev1", 00:16:38.111 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:38.111 "strip_size_kb": 0, 00:16:38.111 "state": "online", 00:16:38.111 
"raid_level": "raid1", 00:16:38.111 "superblock": true, 00:16:38.111 "num_base_bdevs": 4, 00:16:38.111 "num_base_bdevs_discovered": 3, 00:16:38.111 "num_base_bdevs_operational": 3, 00:16:38.111 "process": { 00:16:38.111 "type": "rebuild", 00:16:38.111 "target": "spare", 00:16:38.111 "progress": { 00:16:38.111 "blocks": 20480, 00:16:38.111 "percent": 32 00:16:38.111 } 00:16:38.111 }, 00:16:38.111 "base_bdevs_list": [ 00:16:38.111 { 00:16:38.111 "name": "spare", 00:16:38.111 "uuid": "2eb50172-2fd7-585e-bba5-dcd40136f7d5", 00:16:38.111 "is_configured": true, 00:16:38.111 "data_offset": 2048, 00:16:38.111 "data_size": 63488 00:16:38.111 }, 00:16:38.111 { 00:16:38.111 "name": null, 00:16:38.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.111 "is_configured": false, 00:16:38.111 "data_offset": 2048, 00:16:38.111 "data_size": 63488 00:16:38.111 }, 00:16:38.111 { 00:16:38.111 "name": "BaseBdev3", 00:16:38.111 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:38.111 "is_configured": true, 00:16:38.111 "data_offset": 2048, 00:16:38.111 "data_size": 63488 00:16:38.111 }, 00:16:38.111 { 00:16:38.111 "name": "BaseBdev4", 00:16:38.111 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:38.111 "is_configured": true, 00:16:38.111 "data_offset": 2048, 00:16:38.111 "data_size": 63488 00:16:38.111 } 00:16:38.111 ] 00:16:38.111 }' 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.111 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.370 [2024-11-15 10:45:08.683280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.370 [2024-11-15 10:45:08.732249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.370 [2024-11-15 10:45:08.732367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.370 [2024-11-15 10:45:08.732396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.370 [2024-11-15 10:45:08.732411] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.370 
10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.370 "name": "raid_bdev1", 00:16:38.370 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:38.370 "strip_size_kb": 0, 00:16:38.370 "state": "online", 00:16:38.370 "raid_level": "raid1", 00:16:38.370 "superblock": true, 00:16:38.370 "num_base_bdevs": 4, 00:16:38.370 "num_base_bdevs_discovered": 2, 00:16:38.370 "num_base_bdevs_operational": 2, 00:16:38.370 "base_bdevs_list": [ 00:16:38.370 { 00:16:38.370 "name": null, 00:16:38.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.370 "is_configured": false, 00:16:38.370 "data_offset": 0, 00:16:38.370 "data_size": 63488 00:16:38.370 }, 00:16:38.370 { 00:16:38.370 "name": null, 00:16:38.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.370 "is_configured": false, 00:16:38.370 "data_offset": 2048, 00:16:38.370 "data_size": 63488 00:16:38.370 }, 00:16:38.370 { 00:16:38.370 "name": "BaseBdev3", 00:16:38.370 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:38.370 "is_configured": true, 00:16:38.370 "data_offset": 2048, 00:16:38.370 "data_size": 63488 00:16:38.370 }, 00:16:38.370 { 00:16:38.370 "name": "BaseBdev4", 00:16:38.370 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:38.370 "is_configured": true, 00:16:38.370 "data_offset": 2048, 00:16:38.370 "data_size": 63488 00:16:38.370 } 00:16:38.370 ] 00:16:38.370 }' 00:16:38.370 10:45:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.370 10:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.938 "name": "raid_bdev1", 00:16:38.938 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:38.938 "strip_size_kb": 0, 00:16:38.938 "state": "online", 00:16:38.938 "raid_level": "raid1", 00:16:38.938 "superblock": true, 00:16:38.938 "num_base_bdevs": 4, 00:16:38.938 "num_base_bdevs_discovered": 2, 00:16:38.938 "num_base_bdevs_operational": 2, 00:16:38.938 "base_bdevs_list": [ 00:16:38.938 { 00:16:38.938 "name": null, 00:16:38.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.938 "is_configured": false, 00:16:38.938 "data_offset": 0, 00:16:38.938 "data_size": 63488 00:16:38.938 }, 00:16:38.938 
{ 00:16:38.938 "name": null, 00:16:38.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.938 "is_configured": false, 00:16:38.938 "data_offset": 2048, 00:16:38.938 "data_size": 63488 00:16:38.938 }, 00:16:38.938 { 00:16:38.938 "name": "BaseBdev3", 00:16:38.938 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:38.938 "is_configured": true, 00:16:38.938 "data_offset": 2048, 00:16:38.938 "data_size": 63488 00:16:38.938 }, 00:16:38.938 { 00:16:38.938 "name": "BaseBdev4", 00:16:38.938 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:38.938 "is_configured": true, 00:16:38.938 "data_offset": 2048, 00:16:38.938 "data_size": 63488 00:16:38.938 } 00:16:38.938 ] 00:16:38.938 }' 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.938 [2024-11-15 10:45:09.424147] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:38.938 [2024-11-15 10:45:09.424372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.938 [2024-11-15 10:45:09.424413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:38.938 [2024-11-15 10:45:09.424432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.938 [2024-11-15 10:45:09.424996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.938 [2024-11-15 10:45:09.425033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:38.938 [2024-11-15 10:45:09.425130] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:38.938 [2024-11-15 10:45:09.425156] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:38.938 [2024-11-15 10:45:09.425167] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:38.938 [2024-11-15 10:45:09.425195] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:38.938 BaseBdev1 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.938 10:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.321 10:45:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.321 "name": "raid_bdev1", 00:16:40.321 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:40.321 "strip_size_kb": 0, 00:16:40.321 "state": "online", 00:16:40.321 "raid_level": "raid1", 00:16:40.321 "superblock": true, 00:16:40.321 "num_base_bdevs": 4, 00:16:40.321 "num_base_bdevs_discovered": 2, 00:16:40.321 "num_base_bdevs_operational": 2, 00:16:40.321 "base_bdevs_list": [ 00:16:40.321 { 00:16:40.321 "name": null, 00:16:40.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.321 "is_configured": false, 00:16:40.321 "data_offset": 0, 00:16:40.321 "data_size": 63488 00:16:40.321 }, 00:16:40.321 { 00:16:40.321 "name": null, 00:16:40.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.321 
"is_configured": false, 00:16:40.321 "data_offset": 2048, 00:16:40.321 "data_size": 63488 00:16:40.321 }, 00:16:40.321 { 00:16:40.321 "name": "BaseBdev3", 00:16:40.321 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:40.321 "is_configured": true, 00:16:40.321 "data_offset": 2048, 00:16:40.321 "data_size": 63488 00:16:40.321 }, 00:16:40.321 { 00:16:40.321 "name": "BaseBdev4", 00:16:40.321 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:40.321 "is_configured": true, 00:16:40.321 "data_offset": 2048, 00:16:40.321 "data_size": 63488 00:16:40.321 } 00:16:40.321 ] 00:16:40.321 }' 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.321 10:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.579 10:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:40.580 "name": "raid_bdev1", 00:16:40.580 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:40.580 "strip_size_kb": 0, 00:16:40.580 "state": "online", 00:16:40.580 "raid_level": "raid1", 00:16:40.580 "superblock": true, 00:16:40.580 "num_base_bdevs": 4, 00:16:40.580 "num_base_bdevs_discovered": 2, 00:16:40.580 "num_base_bdevs_operational": 2, 00:16:40.580 "base_bdevs_list": [ 00:16:40.580 { 00:16:40.580 "name": null, 00:16:40.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.580 "is_configured": false, 00:16:40.580 "data_offset": 0, 00:16:40.580 "data_size": 63488 00:16:40.580 }, 00:16:40.580 { 00:16:40.580 "name": null, 00:16:40.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.580 "is_configured": false, 00:16:40.580 "data_offset": 2048, 00:16:40.580 "data_size": 63488 00:16:40.580 }, 00:16:40.580 { 00:16:40.580 "name": "BaseBdev3", 00:16:40.580 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:40.580 "is_configured": true, 00:16:40.580 "data_offset": 2048, 00:16:40.580 "data_size": 63488 00:16:40.580 }, 00:16:40.580 { 00:16:40.580 "name": "BaseBdev4", 00:16:40.580 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:40.580 "is_configured": true, 00:16:40.580 "data_offset": 2048, 00:16:40.580 "data_size": 63488 00:16:40.580 } 00:16:40.580 ] 00:16:40.580 }' 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.580 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.580 [2024-11-15 10:45:11.132790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.580 [2024-11-15 10:45:11.133044] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:40.580 [2024-11-15 10:45:11.133070] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:40.837 request: 00:16:40.837 { 00:16:40.837 "base_bdev": "BaseBdev1", 00:16:40.837 "raid_bdev": "raid_bdev1", 00:16:40.837 "method": "bdev_raid_add_base_bdev", 00:16:40.837 "req_id": 1 00:16:40.837 } 00:16:40.837 Got JSON-RPC error response 00:16:40.837 response: 00:16:40.837 { 00:16:40.837 "code": -22, 00:16:40.837 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:40.837 } 00:16:40.837 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:40.837 10:45:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:16:40.837 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:40.837 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:40.837 10:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:40.837 10:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.771 "name": "raid_bdev1", 00:16:41.771 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:41.771 "strip_size_kb": 0, 00:16:41.771 "state": "online", 00:16:41.771 "raid_level": "raid1", 00:16:41.771 "superblock": true, 00:16:41.771 "num_base_bdevs": 4, 00:16:41.771 "num_base_bdevs_discovered": 2, 00:16:41.771 "num_base_bdevs_operational": 2, 00:16:41.771 "base_bdevs_list": [ 00:16:41.771 { 00:16:41.771 "name": null, 00:16:41.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.771 "is_configured": false, 00:16:41.771 "data_offset": 0, 00:16:41.771 "data_size": 63488 00:16:41.771 }, 00:16:41.771 { 00:16:41.771 "name": null, 00:16:41.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.771 "is_configured": false, 00:16:41.771 "data_offset": 2048, 00:16:41.771 "data_size": 63488 00:16:41.771 }, 00:16:41.771 { 00:16:41.771 "name": "BaseBdev3", 00:16:41.771 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:41.771 "is_configured": true, 00:16:41.771 "data_offset": 2048, 00:16:41.771 "data_size": 63488 00:16:41.771 }, 00:16:41.771 { 00:16:41.771 "name": "BaseBdev4", 00:16:41.771 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:41.771 "is_configured": true, 00:16:41.771 "data_offset": 2048, 00:16:41.771 "data_size": 63488 00:16:41.771 } 00:16:41.771 ] 00:16:41.771 }' 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.771 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.336 10:45:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.336 "name": "raid_bdev1", 00:16:42.336 "uuid": "42786248-7ec2-448f-9abc-2e19147a35b8", 00:16:42.336 "strip_size_kb": 0, 00:16:42.336 "state": "online", 00:16:42.336 "raid_level": "raid1", 00:16:42.336 "superblock": true, 00:16:42.336 "num_base_bdevs": 4, 00:16:42.336 "num_base_bdevs_discovered": 2, 00:16:42.336 "num_base_bdevs_operational": 2, 00:16:42.336 "base_bdevs_list": [ 00:16:42.336 { 00:16:42.336 "name": null, 00:16:42.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.336 "is_configured": false, 00:16:42.336 "data_offset": 0, 00:16:42.336 "data_size": 63488 00:16:42.336 }, 00:16:42.336 { 00:16:42.336 "name": null, 00:16:42.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.336 "is_configured": false, 00:16:42.336 "data_offset": 2048, 00:16:42.336 "data_size": 63488 00:16:42.336 }, 00:16:42.336 { 00:16:42.336 "name": "BaseBdev3", 00:16:42.336 "uuid": "2c89dc67-bff7-5283-80b2-ef9f8c6c4e4d", 00:16:42.336 "is_configured": true, 00:16:42.336 "data_offset": 2048, 00:16:42.336 "data_size": 63488 00:16:42.336 }, 
00:16:42.336 { 00:16:42.336 "name": "BaseBdev4", 00:16:42.336 "uuid": "47cc1d0d-d3e6-5204-8d50-777795184375", 00:16:42.336 "is_configured": true, 00:16:42.336 "data_offset": 2048, 00:16:42.336 "data_size": 63488 00:16:42.336 } 00:16:42.336 ] 00:16:42.336 }' 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78416 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78416 ']' 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78416 00:16:42.336 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:42.337 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:42.337 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78416 00:16:42.337 killing process with pid 78416 00:16:42.337 Received shutdown signal, test time was about 60.000000 seconds 00:16:42.337 00:16:42.337 Latency(us) 00:16:42.337 [2024-11-15T10:45:12.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.337 [2024-11-15T10:45:12.897Z] =================================================================================================================== 00:16:42.337 [2024-11-15T10:45:12.897Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:42.337 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:16:42.337 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:42.337 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78416' 00:16:42.337 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78416 00:16:42.337 [2024-11-15 10:45:12.833161] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.337 10:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78416 00:16:42.337 [2024-11-15 10:45:12.833325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.337 [2024-11-15 10:45:12.833429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.337 [2024-11-15 10:45:12.833447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:42.901 [2024-11-15 10:45:13.244247] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.834 ************************************ 00:16:43.834 END TEST raid_rebuild_test_sb 00:16:43.834 ************************************ 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:43.834 00:16:43.834 real 0m29.553s 00:16:43.834 user 0m35.850s 00:16:43.834 sys 0m4.086s 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.834 10:45:14 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:16:43.834 10:45:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:43.834 10:45:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:43.834 10:45:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:16:43.834 ************************************ 00:16:43.834 START TEST raid_rebuild_test_io 00:16:43.834 ************************************ 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:43.834 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79209 00:16:43.835 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:43.835 10:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79209 00:16:43.835 10:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 79209 ']' 00:16:43.835 10:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.835 10:45:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:16:43.835 10:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.835 10:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:43.835 10:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.093 [2024-11-15 10:45:14.414859] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:16:44.093 [2024-11-15 10:45:14.415218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79209 ] 00:16:44.093 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:44.093 Zero copy mechanism will not be used. [2024-11-15 10:45:14.597593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.352 [2024-11-15 10:45:14.699246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.352 [2024-11-15 10:45:14.879015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.352 [2024-11-15 10:45:14.879065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.922 BaseBdev1_malloc 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.922 [2024-11-15 10:45:15.424940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:44.922 [2024-11-15 10:45:15.425018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.922 [2024-11-15 10:45:15.425050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:44.922 [2024-11-15 10:45:15.425069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.922 [2024-11-15 10:45:15.427680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.922 [2024-11-15 10:45:15.427734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:44.922 BaseBdev1 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:16:44.922 BaseBdev2_malloc 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.922 [2024-11-15 10:45:15.472302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:44.922 [2024-11-15 10:45:15.472390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.922 [2024-11-15 10:45:15.472433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:44.922 [2024-11-15 10:45:15.472452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.922 [2024-11-15 10:45:15.474978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.922 [2024-11-15 10:45:15.475035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:44.922 BaseBdev2 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.922 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.181 BaseBdev3_malloc 00:16:45.181 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.181 10:45:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:45.181 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.181 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.181 [2024-11-15 10:45:15.529486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:45.181 [2024-11-15 10:45:15.529558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.181 [2024-11-15 10:45:15.529590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:45.181 [2024-11-15 10:45:15.529609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.181 [2024-11-15 10:45:15.532138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.181 [2024-11-15 10:45:15.532189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.181 BaseBdev3 00:16:45.181 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.181 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 BaseBdev4_malloc 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 [2024-11-15 10:45:15.573087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:45.182 [2024-11-15 10:45:15.573173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.182 [2024-11-15 10:45:15.573208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:45.182 [2024-11-15 10:45:15.573226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.182 [2024-11-15 10:45:15.575929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.182 [2024-11-15 10:45:15.576113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:45.182 BaseBdev4 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 spare_malloc 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 spare_delay 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 [2024-11-15 10:45:15.628901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.182 [2024-11-15 10:45:15.628970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.182 [2024-11-15 10:45:15.628998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:45.182 [2024-11-15 10:45:15.629016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.182 [2024-11-15 10:45:15.631604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.182 [2024-11-15 10:45:15.631654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.182 spare 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 [2024-11-15 10:45:15.636951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.182 [2024-11-15 10:45:15.639172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.182 [2024-11-15 10:45:15.639418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.182 [2024-11-15 10:45:15.639523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:16:45.182 [2024-11-15 10:45:15.639641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:45.182 [2024-11-15 10:45:15.639665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:45.182 [2024-11-15 10:45:15.639995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:45.182 [2024-11-15 10:45:15.640212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:45.182 [2024-11-15 10:45:15.640232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:45.182 [2024-11-15 10:45:15.640452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.182 "name": "raid_bdev1", 00:16:45.182 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:45.182 "strip_size_kb": 0, 00:16:45.182 "state": "online", 00:16:45.182 "raid_level": "raid1", 00:16:45.182 "superblock": false, 00:16:45.182 "num_base_bdevs": 4, 00:16:45.182 "num_base_bdevs_discovered": 4, 00:16:45.182 "num_base_bdevs_operational": 4, 00:16:45.182 "base_bdevs_list": [ 00:16:45.182 { 00:16:45.182 "name": "BaseBdev1", 00:16:45.182 "uuid": "cd0610bf-d5f9-59e9-aeb2-e05f82827c55", 00:16:45.182 "is_configured": true, 00:16:45.182 "data_offset": 0, 00:16:45.182 "data_size": 65536 00:16:45.182 }, 00:16:45.182 { 00:16:45.182 "name": "BaseBdev2", 00:16:45.182 "uuid": "81442263-f5a3-5d93-9103-d63b1519ba84", 00:16:45.182 "is_configured": true, 00:16:45.182 "data_offset": 0, 00:16:45.182 "data_size": 65536 00:16:45.182 }, 00:16:45.182 { 00:16:45.182 "name": "BaseBdev3", 00:16:45.182 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:45.182 "is_configured": true, 00:16:45.182 "data_offset": 0, 00:16:45.182 "data_size": 65536 00:16:45.182 }, 00:16:45.182 { 00:16:45.182 "name": "BaseBdev4", 00:16:45.182 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:45.182 "is_configured": true, 00:16:45.182 "data_offset": 0, 00:16:45.182 "data_size": 65536 00:16:45.182 } 00:16:45.182 ] 00:16:45.182 }' 00:16:45.182 
10:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.182 10:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.749 [2024-11-15 10:45:16.153517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:45.749 10:45:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.749 [2024-11-15 10:45:16.269117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.749 10:45:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.008 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.008 "name": "raid_bdev1", 00:16:46.008 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:46.008 "strip_size_kb": 0, 00:16:46.008 "state": "online", 00:16:46.008 "raid_level": "raid1", 00:16:46.008 "superblock": false, 00:16:46.008 "num_base_bdevs": 4, 00:16:46.008 "num_base_bdevs_discovered": 3, 00:16:46.008 "num_base_bdevs_operational": 3, 00:16:46.008 "base_bdevs_list": [ 00:16:46.008 { 00:16:46.008 "name": null, 00:16:46.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.008 "is_configured": false, 00:16:46.008 "data_offset": 0, 00:16:46.008 "data_size": 65536 00:16:46.008 }, 00:16:46.008 { 00:16:46.008 "name": "BaseBdev2", 00:16:46.008 "uuid": "81442263-f5a3-5d93-9103-d63b1519ba84", 00:16:46.008 "is_configured": true, 00:16:46.008 "data_offset": 0, 00:16:46.008 "data_size": 65536 00:16:46.008 }, 00:16:46.008 { 00:16:46.008 "name": "BaseBdev3", 00:16:46.008 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:46.008 "is_configured": true, 00:16:46.008 "data_offset": 0, 00:16:46.008 "data_size": 65536 00:16:46.008 }, 00:16:46.008 { 00:16:46.008 "name": "BaseBdev4", 00:16:46.008 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:46.008 "is_configured": true, 00:16:46.008 "data_offset": 0, 00:16:46.008 "data_size": 65536 00:16:46.008 } 00:16:46.008 ] 00:16:46.008 }' 00:16:46.008 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.008 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.008 [2024-11-15 10:45:16.408103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:46.008 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:46.008 Zero copy mechanism will not be used. 00:16:46.008 Running I/O for 60 seconds... 
00:16:46.266 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.266 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.266 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.266 [2024-11-15 10:45:16.786067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.524 10:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.524 10:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:46.524 [2024-11-15 10:45:16.884487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:46.524 [2024-11-15 10:45:16.886892] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:46.524 [2024-11-15 10:45:17.006808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:46.524 [2024-11-15 10:45:17.007959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:46.782 [2024-11-15 10:45:17.229482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:46.782 [2024-11-15 10:45:17.230323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:47.299 141.00 IOPS, 423.00 MiB/s [2024-11-15T10:45:17.859Z] [2024-11-15 10:45:17.610818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:47.299 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.299 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:47.299 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.299 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.299 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.299 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.299 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.299 10:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.299 10:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.299 [2024-11-15 10:45:17.848559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:47.299 [2024-11-15 10:45:17.849166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:47.560 10:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.560 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.560 "name": "raid_bdev1", 00:16:47.560 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:47.560 "strip_size_kb": 0, 00:16:47.560 "state": "online", 00:16:47.560 "raid_level": "raid1", 00:16:47.560 "superblock": false, 00:16:47.560 "num_base_bdevs": 4, 00:16:47.560 "num_base_bdevs_discovered": 4, 00:16:47.560 "num_base_bdevs_operational": 4, 00:16:47.560 "process": { 00:16:47.560 "type": "rebuild", 00:16:47.560 "target": "spare", 00:16:47.560 "progress": { 00:16:47.560 "blocks": 10240, 00:16:47.560 "percent": 15 00:16:47.560 } 00:16:47.560 }, 00:16:47.560 "base_bdevs_list": [ 00:16:47.560 { 00:16:47.560 "name": "spare", 00:16:47.560 "uuid": 
"0798a0f8-82f6-5c66-af95-451cd392f615", 00:16:47.560 "is_configured": true, 00:16:47.560 "data_offset": 0, 00:16:47.560 "data_size": 65536 00:16:47.560 }, 00:16:47.560 { 00:16:47.560 "name": "BaseBdev2", 00:16:47.560 "uuid": "81442263-f5a3-5d93-9103-d63b1519ba84", 00:16:47.560 "is_configured": true, 00:16:47.560 "data_offset": 0, 00:16:47.560 "data_size": 65536 00:16:47.560 }, 00:16:47.560 { 00:16:47.560 "name": "BaseBdev3", 00:16:47.560 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:47.560 "is_configured": true, 00:16:47.560 "data_offset": 0, 00:16:47.560 "data_size": 65536 00:16:47.560 }, 00:16:47.560 { 00:16:47.560 "name": "BaseBdev4", 00:16:47.560 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:47.560 "is_configured": true, 00:16:47.560 "data_offset": 0, 00:16:47.560 "data_size": 65536 00:16:47.560 } 00:16:47.560 ] 00:16:47.560 }' 00:16:47.560 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.560 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.560 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.560 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.560 10:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:47.560 10:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.560 10:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.560 [2024-11-15 10:45:18.000098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.821 [2024-11-15 10:45:18.193596] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.821 [2024-11-15 10:45:18.204692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:47.821 [2024-11-15 10:45:18.204758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.821 [2024-11-15 10:45:18.204777] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.821 [2024-11-15 10:45:18.234479] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.821 10:45:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.821 "name": "raid_bdev1", 00:16:47.821 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:47.821 "strip_size_kb": 0, 00:16:47.821 "state": "online", 00:16:47.821 "raid_level": "raid1", 00:16:47.821 "superblock": false, 00:16:47.821 "num_base_bdevs": 4, 00:16:47.821 "num_base_bdevs_discovered": 3, 00:16:47.821 "num_base_bdevs_operational": 3, 00:16:47.821 "base_bdevs_list": [ 00:16:47.821 { 00:16:47.821 "name": null, 00:16:47.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.821 "is_configured": false, 00:16:47.821 "data_offset": 0, 00:16:47.821 "data_size": 65536 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "name": "BaseBdev2", 00:16:47.821 "uuid": "81442263-f5a3-5d93-9103-d63b1519ba84", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 0, 00:16:47.821 "data_size": 65536 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "name": "BaseBdev3", 00:16:47.821 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 0, 00:16:47.821 "data_size": 65536 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "name": "BaseBdev4", 00:16:47.821 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 0, 00:16:47.821 "data_size": 65536 00:16:47.821 } 00:16:47.821 ] 00:16:47.821 }' 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.821 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.338 108.50 IOPS, 325.50 MiB/s [2024-11-15T10:45:18.898Z] 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.338 10:45:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.338 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.338 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.338 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.338 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.338 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.338 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.338 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.338 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.338 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.338 "name": "raid_bdev1", 00:16:48.338 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:48.338 "strip_size_kb": 0, 00:16:48.338 "state": "online", 00:16:48.338 "raid_level": "raid1", 00:16:48.338 "superblock": false, 00:16:48.338 "num_base_bdevs": 4, 00:16:48.338 "num_base_bdevs_discovered": 3, 00:16:48.338 "num_base_bdevs_operational": 3, 00:16:48.338 "base_bdevs_list": [ 00:16:48.338 { 00:16:48.338 "name": null, 00:16:48.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.338 "is_configured": false, 00:16:48.338 "data_offset": 0, 00:16:48.338 "data_size": 65536 00:16:48.338 }, 00:16:48.338 { 00:16:48.338 "name": "BaseBdev2", 00:16:48.338 "uuid": "81442263-f5a3-5d93-9103-d63b1519ba84", 00:16:48.338 "is_configured": true, 00:16:48.338 "data_offset": 0, 00:16:48.338 "data_size": 65536 00:16:48.338 }, 00:16:48.338 { 00:16:48.338 "name": "BaseBdev3", 00:16:48.338 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 
00:16:48.338 "is_configured": true, 00:16:48.338 "data_offset": 0, 00:16:48.338 "data_size": 65536 00:16:48.338 }, 00:16:48.338 { 00:16:48.339 "name": "BaseBdev4", 00:16:48.339 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:48.339 "is_configured": true, 00:16:48.339 "data_offset": 0, 00:16:48.339 "data_size": 65536 00:16:48.339 } 00:16:48.339 ] 00:16:48.339 }' 00:16:48.339 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.339 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.339 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.339 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.339 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.339 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.339 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.597 [2024-11-15 10:45:18.900797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.597 10:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.597 10:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:48.598 [2024-11-15 10:45:18.990688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:48.598 [2024-11-15 10:45:18.993058] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.598 [2024-11-15 10:45:19.098884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:48.598 [2024-11-15 10:45:19.099929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:48.857 [2024-11-15 10:45:19.311073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:48.857 [2024-11-15 10:45:19.311411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:49.115 126.00 IOPS, 378.00 MiB/s [2024-11-15T10:45:19.676Z] [2024-11-15 10:45:19.649957] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:49.116 [2024-11-15 10:45:19.650581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:49.682 10:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.682 10:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.682 10:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.682 10:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.682 10:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.682 10:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.682 10:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.682 10:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.682 10:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.682 [2024-11-15 10:45:19.992434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.682 "name": "raid_bdev1", 00:16:49.682 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:49.682 "strip_size_kb": 0, 00:16:49.682 "state": "online", 00:16:49.682 "raid_level": "raid1", 00:16:49.682 "superblock": false, 00:16:49.682 "num_base_bdevs": 4, 00:16:49.682 "num_base_bdevs_discovered": 4, 00:16:49.682 "num_base_bdevs_operational": 4, 00:16:49.682 "process": { 00:16:49.682 "type": "rebuild", 00:16:49.682 "target": "spare", 00:16:49.682 "progress": { 00:16:49.682 "blocks": 12288, 00:16:49.682 "percent": 18 00:16:49.682 } 00:16:49.682 }, 00:16:49.682 "base_bdevs_list": [ 00:16:49.682 { 00:16:49.682 "name": "spare", 00:16:49.682 "uuid": "0798a0f8-82f6-5c66-af95-451cd392f615", 00:16:49.682 "is_configured": true, 00:16:49.682 "data_offset": 0, 00:16:49.682 "data_size": 65536 00:16:49.682 }, 00:16:49.682 { 00:16:49.682 "name": "BaseBdev2", 00:16:49.682 "uuid": "81442263-f5a3-5d93-9103-d63b1519ba84", 00:16:49.682 "is_configured": true, 00:16:49.682 "data_offset": 0, 00:16:49.682 "data_size": 65536 00:16:49.682 }, 00:16:49.682 { 00:16:49.682 "name": "BaseBdev3", 00:16:49.682 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:49.682 "is_configured": true, 00:16:49.682 "data_offset": 0, 00:16:49.682 "data_size": 65536 00:16:49.682 }, 00:16:49.682 { 00:16:49.682 "name": "BaseBdev4", 00:16:49.682 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:49.682 "is_configured": true, 00:16:49.682 "data_offset": 0, 00:16:49.682 "data_size": 65536 00:16:49.682 } 00:16:49.682 ] 00:16:49.682 }' 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.682 10:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.682 [2024-11-15 10:45:20.135037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.682 [2024-11-15 10:45:20.216918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:49.941 [2024-11-15 10:45:20.327070] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:49.941 [2024-11-15 10:45:20.327126] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:49.941 [2024-11-15 10:45:20.336905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:49.941 10:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.941 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:49.941 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:49.941 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.941 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.941 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.941 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.942 "name": "raid_bdev1", 00:16:49.942 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:49.942 "strip_size_kb": 0, 00:16:49.942 "state": "online", 00:16:49.942 "raid_level": "raid1", 00:16:49.942 "superblock": false, 00:16:49.942 "num_base_bdevs": 4, 00:16:49.942 "num_base_bdevs_discovered": 3, 00:16:49.942 "num_base_bdevs_operational": 3, 00:16:49.942 "process": { 00:16:49.942 "type": "rebuild", 00:16:49.942 "target": "spare", 00:16:49.942 "progress": { 00:16:49.942 "blocks": 16384, 00:16:49.942 "percent": 25 00:16:49.942 } 00:16:49.942 }, 00:16:49.942 "base_bdevs_list": [ 00:16:49.942 { 00:16:49.942 "name": "spare", 00:16:49.942 "uuid": "0798a0f8-82f6-5c66-af95-451cd392f615", 00:16:49.942 "is_configured": true, 00:16:49.942 "data_offset": 0, 00:16:49.942 "data_size": 65536 00:16:49.942 }, 00:16:49.942 { 00:16:49.942 "name": null, 
00:16:49.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.942 "is_configured": false, 00:16:49.942 "data_offset": 0, 00:16:49.942 "data_size": 65536 00:16:49.942 }, 00:16:49.942 { 00:16:49.942 "name": "BaseBdev3", 00:16:49.942 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:49.942 "is_configured": true, 00:16:49.942 "data_offset": 0, 00:16:49.942 "data_size": 65536 00:16:49.942 }, 00:16:49.942 { 00:16:49.942 "name": "BaseBdev4", 00:16:49.942 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:49.942 "is_configured": true, 00:16:49.942 "data_offset": 0, 00:16:49.942 "data_size": 65536 00:16:49.942 } 00:16:49.942 ] 00:16:49.942 }' 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.942 109.00 IOPS, 327.00 MiB/s [2024-11-15T10:45:20.502Z] 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=514 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.942 10:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.201 10:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.201 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.201 "name": "raid_bdev1", 00:16:50.201 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:50.201 "strip_size_kb": 0, 00:16:50.201 "state": "online", 00:16:50.201 "raid_level": "raid1", 00:16:50.201 "superblock": false, 00:16:50.201 "num_base_bdevs": 4, 00:16:50.201 "num_base_bdevs_discovered": 3, 00:16:50.201 "num_base_bdevs_operational": 3, 00:16:50.201 "process": { 00:16:50.201 "type": "rebuild", 00:16:50.201 "target": "spare", 00:16:50.201 "progress": { 00:16:50.201 "blocks": 16384, 00:16:50.201 "percent": 25 00:16:50.201 } 00:16:50.201 }, 00:16:50.201 "base_bdevs_list": [ 00:16:50.201 { 00:16:50.201 "name": "spare", 00:16:50.201 "uuid": "0798a0f8-82f6-5c66-af95-451cd392f615", 00:16:50.201 "is_configured": true, 00:16:50.201 "data_offset": 0, 00:16:50.201 "data_size": 65536 00:16:50.201 }, 00:16:50.201 { 00:16:50.201 "name": null, 00:16:50.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.201 "is_configured": false, 00:16:50.201 "data_offset": 0, 00:16:50.201 "data_size": 65536 00:16:50.201 }, 00:16:50.201 { 00:16:50.201 "name": "BaseBdev3", 00:16:50.201 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:50.201 "is_configured": true, 00:16:50.201 "data_offset": 0, 00:16:50.201 "data_size": 65536 00:16:50.201 }, 00:16:50.201 { 00:16:50.201 "name": "BaseBdev4", 00:16:50.201 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:50.201 "is_configured": true, 
00:16:50.201 "data_offset": 0, 00:16:50.201 "data_size": 65536 00:16:50.201 } 00:16:50.201 ] 00:16:50.201 }' 00:16:50.201 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.201 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.201 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.201 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.201 10:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.460 [2024-11-15 10:45:20.801829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:50.719 [2024-11-15 10:45:21.155472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:51.237 101.40 IOPS, 304.20 MiB/s [2024-11-15T10:45:21.797Z] [2024-11-15 10:45:21.606253] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:51.237 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.237 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.237 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.237 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.237 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.237 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.238 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.238 
10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.238 10:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.238 10:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.238 10:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.238 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.238 "name": "raid_bdev1", 00:16:51.238 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:51.238 "strip_size_kb": 0, 00:16:51.238 "state": "online", 00:16:51.238 "raid_level": "raid1", 00:16:51.238 "superblock": false, 00:16:51.238 "num_base_bdevs": 4, 00:16:51.238 "num_base_bdevs_discovered": 3, 00:16:51.238 "num_base_bdevs_operational": 3, 00:16:51.238 "process": { 00:16:51.238 "type": "rebuild", 00:16:51.238 "target": "spare", 00:16:51.238 "progress": { 00:16:51.238 "blocks": 32768, 00:16:51.238 "percent": 50 00:16:51.238 } 00:16:51.238 }, 00:16:51.238 "base_bdevs_list": [ 00:16:51.238 { 00:16:51.238 "name": "spare", 00:16:51.238 "uuid": "0798a0f8-82f6-5c66-af95-451cd392f615", 00:16:51.238 "is_configured": true, 00:16:51.238 "data_offset": 0, 00:16:51.238 "data_size": 65536 00:16:51.238 }, 00:16:51.238 { 00:16:51.238 "name": null, 00:16:51.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.238 "is_configured": false, 00:16:51.238 "data_offset": 0, 00:16:51.238 "data_size": 65536 00:16:51.238 }, 00:16:51.238 { 00:16:51.238 "name": "BaseBdev3", 00:16:51.238 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:51.238 "is_configured": true, 00:16:51.238 "data_offset": 0, 00:16:51.238 "data_size": 65536 00:16:51.238 }, 00:16:51.238 { 00:16:51.238 "name": "BaseBdev4", 00:16:51.238 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:51.238 "is_configured": true, 00:16:51.238 "data_offset": 0, 00:16:51.238 "data_size": 65536 
00:16:51.238 } 00:16:51.238 ] 00:16:51.238 }' 00:16:51.238 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.238 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.238 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.497 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.497 10:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.324 89.67 IOPS, 269.00 MiB/s [2024-11-15T10:45:22.884Z] 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.324 [2024-11-15 10:45:22.846478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 
offset_begin: 49152 offset_end: 55296 00:16:52.324 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.324 "name": "raid_bdev1", 00:16:52.324 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:52.324 "strip_size_kb": 0, 00:16:52.324 "state": "online", 00:16:52.324 "raid_level": "raid1", 00:16:52.324 "superblock": false, 00:16:52.324 "num_base_bdevs": 4, 00:16:52.324 "num_base_bdevs_discovered": 3, 00:16:52.324 "num_base_bdevs_operational": 3, 00:16:52.324 "process": { 00:16:52.324 "type": "rebuild", 00:16:52.324 "target": "spare", 00:16:52.324 "progress": { 00:16:52.325 "blocks": 51200, 00:16:52.325 "percent": 78 00:16:52.325 } 00:16:52.325 }, 00:16:52.325 "base_bdevs_list": [ 00:16:52.325 { 00:16:52.325 "name": "spare", 00:16:52.325 "uuid": "0798a0f8-82f6-5c66-af95-451cd392f615", 00:16:52.325 "is_configured": true, 00:16:52.325 "data_offset": 0, 00:16:52.325 "data_size": 65536 00:16:52.325 }, 00:16:52.325 { 00:16:52.325 "name": null, 00:16:52.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.325 "is_configured": false, 00:16:52.325 "data_offset": 0, 00:16:52.325 "data_size": 65536 00:16:52.325 }, 00:16:52.325 { 00:16:52.325 "name": "BaseBdev3", 00:16:52.325 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:52.325 "is_configured": true, 00:16:52.325 "data_offset": 0, 00:16:52.325 "data_size": 65536 00:16:52.325 }, 00:16:52.325 { 00:16:52.325 "name": "BaseBdev4", 00:16:52.325 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:52.325 "is_configured": true, 00:16:52.325 "data_offset": 0, 00:16:52.325 "data_size": 65536 00:16:52.325 } 00:16:52.325 ] 00:16:52.325 }' 00:16:52.325 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.583 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.583 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:16:52.583 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.583 10:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.583 [2024-11-15 10:45:23.066079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:52.842 [2024-11-15 10:45:23.285831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:53.100 83.00 IOPS, 249.00 MiB/s [2024-11-15T10:45:23.660Z] [2024-11-15 10:45:23.624655] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:53.359 [2024-11-15 10:45:23.732395] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:53.359 [2024-11-15 10:45:23.734459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:53.617 10:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.617 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.617 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.617 "name": "raid_bdev1", 00:16:53.617 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:53.617 "strip_size_kb": 0, 00:16:53.617 "state": "online", 00:16:53.617 "raid_level": "raid1", 00:16:53.617 "superblock": false, 00:16:53.617 "num_base_bdevs": 4, 00:16:53.617 "num_base_bdevs_discovered": 3, 00:16:53.617 "num_base_bdevs_operational": 3, 00:16:53.617 "base_bdevs_list": [ 00:16:53.617 { 00:16:53.617 "name": "spare", 00:16:53.617 "uuid": "0798a0f8-82f6-5c66-af95-451cd392f615", 00:16:53.617 "is_configured": true, 00:16:53.617 "data_offset": 0, 00:16:53.617 "data_size": 65536 00:16:53.617 }, 00:16:53.617 { 00:16:53.617 "name": null, 00:16:53.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.617 "is_configured": false, 00:16:53.617 "data_offset": 0, 00:16:53.617 "data_size": 65536 00:16:53.617 }, 00:16:53.617 { 00:16:53.617 "name": "BaseBdev3", 00:16:53.617 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:53.617 "is_configured": true, 00:16:53.617 "data_offset": 0, 00:16:53.617 "data_size": 65536 00:16:53.617 }, 00:16:53.617 { 00:16:53.617 "name": "BaseBdev4", 00:16:53.617 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:53.617 "is_configured": true, 00:16:53.617 "data_offset": 0, 00:16:53.617 "data_size": 65536 00:16:53.617 } 00:16:53.617 ] 00:16:53.617 }' 00:16:53.617 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.617 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:53.617 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.617 10:45:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:53.617 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:53.617 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.618 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.618 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.618 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.618 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.618 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.618 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.618 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.618 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.618 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.877 "name": "raid_bdev1", 00:16:53.877 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:53.877 "strip_size_kb": 0, 00:16:53.877 "state": "online", 00:16:53.877 "raid_level": "raid1", 00:16:53.877 "superblock": false, 00:16:53.877 "num_base_bdevs": 4, 00:16:53.877 "num_base_bdevs_discovered": 3, 00:16:53.877 "num_base_bdevs_operational": 3, 00:16:53.877 "base_bdevs_list": [ 00:16:53.877 { 00:16:53.877 "name": "spare", 00:16:53.877 "uuid": "0798a0f8-82f6-5c66-af95-451cd392f615", 00:16:53.877 "is_configured": true, 00:16:53.877 "data_offset": 0, 00:16:53.877 "data_size": 65536 00:16:53.877 }, 
00:16:53.877 { 00:16:53.877 "name": null, 00:16:53.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.877 "is_configured": false, 00:16:53.877 "data_offset": 0, 00:16:53.877 "data_size": 65536 00:16:53.877 }, 00:16:53.877 { 00:16:53.877 "name": "BaseBdev3", 00:16:53.877 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:53.877 "is_configured": true, 00:16:53.877 "data_offset": 0, 00:16:53.877 "data_size": 65536 00:16:53.877 }, 00:16:53.877 { 00:16:53.877 "name": "BaseBdev4", 00:16:53.877 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:53.877 "is_configured": true, 00:16:53.877 "data_offset": 0, 00:16:53.877 "data_size": 65536 00:16:53.877 } 00:16:53.877 ] 00:16:53.877 }' 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.877 "name": "raid_bdev1", 00:16:53.877 "uuid": "c6586159-f2ee-40bd-a4e4-80fca078690b", 00:16:53.877 "strip_size_kb": 0, 00:16:53.877 "state": "online", 00:16:53.877 "raid_level": "raid1", 00:16:53.877 "superblock": false, 00:16:53.877 "num_base_bdevs": 4, 00:16:53.877 "num_base_bdevs_discovered": 3, 00:16:53.877 "num_base_bdevs_operational": 3, 00:16:53.877 "base_bdevs_list": [ 00:16:53.877 { 00:16:53.877 "name": "spare", 00:16:53.877 "uuid": "0798a0f8-82f6-5c66-af95-451cd392f615", 00:16:53.877 "is_configured": true, 00:16:53.877 "data_offset": 0, 00:16:53.877 "data_size": 65536 00:16:53.877 }, 00:16:53.877 { 00:16:53.877 "name": null, 00:16:53.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.877 "is_configured": false, 00:16:53.877 "data_offset": 0, 00:16:53.877 "data_size": 65536 00:16:53.877 }, 00:16:53.877 { 00:16:53.877 "name": "BaseBdev3", 00:16:53.877 "uuid": "e3bf1dd4-ea88-5d33-8092-a3abd52df25f", 00:16:53.877 "is_configured": true, 00:16:53.877 "data_offset": 0, 00:16:53.877 "data_size": 65536 00:16:53.877 }, 00:16:53.877 { 00:16:53.877 "name": 
"BaseBdev4", 00:16:53.877 "uuid": "c8649731-92ec-5b99-88d1-5bf162938468", 00:16:53.877 "is_configured": true, 00:16:53.877 "data_offset": 0, 00:16:53.877 "data_size": 65536 00:16:53.877 } 00:16:53.877 ] 00:16:53.877 }' 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.877 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.445 77.00 IOPS, 231.00 MiB/s [2024-11-15T10:45:25.005Z] 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.445 [2024-11-15 10:45:24.812145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.445 [2024-11-15 10:45:24.812184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.445 00:16:54.445 Latency(us) 00:16:54.445 [2024-11-15T10:45:25.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.445 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:54.445 raid_bdev1 : 8.43 74.47 223.40 0.00 0.00 18272.57 297.89 118203.11 00:16:54.445 [2024-11-15T10:45:25.005Z] =================================================================================================================== 00:16:54.445 [2024-11-15T10:45:25.005Z] Total : 74.47 223.40 0.00 0.00 18272.57 297.89 118203.11 00:16:54.445 { 00:16:54.445 "results": [ 00:16:54.445 { 00:16:54.445 "job": "raid_bdev1", 00:16:54.445 "core_mask": "0x1", 00:16:54.445 "workload": "randrw", 00:16:54.445 "percentage": 50, 00:16:54.445 "status": "finished", 00:16:54.445 "queue_depth": 2, 00:16:54.445 "io_size": 3145728, 00:16:54.445 "runtime": 8.433367, 00:16:54.445 "iops": 74.46610588629666, 
00:16:54.445 "mibps": 223.39831765889, 00:16:54.445 "io_failed": 0, 00:16:54.445 "io_timeout": 0, 00:16:54.445 "avg_latency_us": 18272.572924145916, 00:16:54.445 "min_latency_us": 297.8909090909091, 00:16:54.445 "max_latency_us": 118203.11272727273 00:16:54.445 } 00:16:54.445 ], 00:16:54.445 "core_count": 1 00:16:54.445 } 00:16:54.445 [2024-11-15 10:45:24.863758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.445 [2024-11-15 10:45:24.863853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.445 [2024-11-15 10:45:24.863988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.445 [2024-11-15 10:45:24.864009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:54.445 10:45:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:54.703 /dev/nbd0 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:54.962 10:45:25 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.962 1+0 records in 00:16:54.962 1+0 records out 00:16:54.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648444 s, 6.3 MB/s 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:54.962 10:45:25 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:54.962 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:55.221 /dev/nbd1 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:55.221 1+0 records in 00:16:55.221 1+0 records out 00:16:55.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428629 s, 9.6 MB/s 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:55.221 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:55.480 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:55.480 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:55.480 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:55.480 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:55.480 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:55.480 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.480 10:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:55.739 10:45:26 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:55.739 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk BaseBdev4 /dev/nbd1 00:16:55.997 /dev/nbd1 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.997 1+0 records in 00:16:55.997 1+0 records out 00:16:55.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328379 s, 12.5 MB/s 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@891 -- # return 0 00:16:55.997 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.998 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:55.998 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:56.256 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:56.256 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.256 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:56.256 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:56.256 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:56.256 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.256 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.515 10:45:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.515 10:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79209 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 79209 ']' 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 
-- # kill -0 79209 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79209 00:16:56.774 killing process with pid 79209 00:16:56.774 Received shutdown signal, test time was about 10.805322 seconds 00:16:56.774 00:16:56.774 Latency(us) 00:16:56.774 [2024-11-15T10:45:27.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.774 [2024-11-15T10:45:27.334Z] =================================================================================================================== 00:16:56.774 [2024-11-15T10:45:27.334Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79209' 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 79209 00:16:56.774 [2024-11-15 10:45:27.215992] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.774 10:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 79209 00:16:57.033 [2024-11-15 10:45:27.576576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:58.437 00:16:58.437 real 0m14.319s 00:16:58.437 user 0m19.045s 00:16:58.437 sys 0m1.614s 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:58.437 ************************************ 00:16:58.437 END TEST 
raid_rebuild_test_io 00:16:58.437 ************************************ 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:58.437 10:45:28 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:58.437 10:45:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:58.437 10:45:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:58.437 10:45:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.437 ************************************ 00:16:58.437 START TEST raid_rebuild_test_sb_io 00:16:58.437 ************************************ 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 
00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79630 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79630 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79630 ']' 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:58.437 10:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:58.437 [2024-11-15 10:45:28.771322] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:16:58.437 [2024-11-15 10:45:28.771680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79630 ] 00:16:58.437 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:58.437 Zero copy mechanism will not be used. 
00:16:58.437 [2024-11-15 10:45:28.945991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.696 [2024-11-15 10:45:29.050890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.697 [2024-11-15 10:45:29.233495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.697 [2024-11-15 10:45:29.233750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.264 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:59.264 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:16:59.264 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:59.264 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:59.264 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.264 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 BaseBdev1_malloc 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 [2024-11-15 10:45:29.854636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:59.524 [2024-11-15 10:45:29.854714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.524 [2024-11-15 10:45:29.854745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:16:59.524 [2024-11-15 10:45:29.854763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.524 [2024-11-15 10:45:29.857323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.524 [2024-11-15 10:45:29.857535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:59.524 BaseBdev1 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 BaseBdev2_malloc 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 [2024-11-15 10:45:29.906532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:59.524 [2024-11-15 10:45:29.906615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.524 [2024-11-15 10:45:29.906646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:59.524 [2024-11-15 10:45:29.906664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.524 [2024-11-15 10:45:29.909224] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.524 [2024-11-15 10:45:29.909274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:59.524 BaseBdev2 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 BaseBdev3_malloc 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 [2024-11-15 10:45:29.963102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:59.524 [2024-11-15 10:45:29.963175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.524 [2024-11-15 10:45:29.963206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:59.524 [2024-11-15 10:45:29.963224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.524 [2024-11-15 10:45:29.965777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.524 [2024-11-15 10:45:29.965829] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:16:59.524 BaseBdev3 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 BaseBdev4_malloc 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 [2024-11-15 10:45:30.010705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:59.524 [2024-11-15 10:45:30.010783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.524 [2024-11-15 10:45:30.010813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:59.524 [2024-11-15 10:45:30.010831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.524 [2024-11-15 10:45:30.013444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.524 [2024-11-15 10:45:30.013497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:59.524 BaseBdev4 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 spare_malloc 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 spare_delay 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 [2024-11-15 10:45:30.066513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:59.524 [2024-11-15 10:45:30.066580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.524 [2024-11-15 10:45:30.066607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:59.524 [2024-11-15 10:45:30.066624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.524 [2024-11-15 10:45:30.069228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.524 [2024-11-15 10:45:30.069280] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:59.524 spare 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.524 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.524 [2024-11-15 10:45:30.078577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.783 [2024-11-15 10:45:30.080842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.783 [2024-11-15 10:45:30.080937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.783 [2024-11-15 10:45:30.081020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:59.783 [2024-11-15 10:45:30.081270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:59.783 [2024-11-15 10:45:30.081293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:59.783 [2024-11-15 10:45:30.081630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:59.783 [2024-11-15 10:45:30.081858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:59.783 [2024-11-15 10:45:30.081875] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:59.783 [2024-11-15 10:45:30.082065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.784 "name": "raid_bdev1", 00:16:59.784 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:16:59.784 "strip_size_kb": 0, 00:16:59.784 "state": "online", 00:16:59.784 "raid_level": "raid1", 
00:16:59.784 "superblock": true, 00:16:59.784 "num_base_bdevs": 4, 00:16:59.784 "num_base_bdevs_discovered": 4, 00:16:59.784 "num_base_bdevs_operational": 4, 00:16:59.784 "base_bdevs_list": [ 00:16:59.784 { 00:16:59.784 "name": "BaseBdev1", 00:16:59.784 "uuid": "2dcf4476-0d51-5908-8209-dd30d9fecae5", 00:16:59.784 "is_configured": true, 00:16:59.784 "data_offset": 2048, 00:16:59.784 "data_size": 63488 00:16:59.784 }, 00:16:59.784 { 00:16:59.784 "name": "BaseBdev2", 00:16:59.784 "uuid": "ece36ebe-f041-5478-8048-dcd16d18a19d", 00:16:59.784 "is_configured": true, 00:16:59.784 "data_offset": 2048, 00:16:59.784 "data_size": 63488 00:16:59.784 }, 00:16:59.784 { 00:16:59.784 "name": "BaseBdev3", 00:16:59.784 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:16:59.784 "is_configured": true, 00:16:59.784 "data_offset": 2048, 00:16:59.784 "data_size": 63488 00:16:59.784 }, 00:16:59.784 { 00:16:59.784 "name": "BaseBdev4", 00:16:59.784 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:16:59.784 "is_configured": true, 00:16:59.784 "data_offset": 2048, 00:16:59.784 "data_size": 63488 00:16:59.784 } 00:16:59.784 ] 00:16:59.784 }' 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.784 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.352 [2024-11-15 10:45:30.611124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.352 [2024-11-15 10:45:30.718695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.352 10:45:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.352 "name": "raid_bdev1", 00:17:00.352 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:00.352 "strip_size_kb": 0, 00:17:00.352 "state": "online", 00:17:00.352 "raid_level": "raid1", 00:17:00.352 "superblock": true, 00:17:00.352 "num_base_bdevs": 4, 00:17:00.352 "num_base_bdevs_discovered": 3, 00:17:00.352 "num_base_bdevs_operational": 3, 00:17:00.352 "base_bdevs_list": [ 00:17:00.352 { 00:17:00.352 "name": null, 00:17:00.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.352 "is_configured": false, 00:17:00.352 "data_offset": 0, 00:17:00.352 "data_size": 
63488 00:17:00.352 }, 00:17:00.352 { 00:17:00.352 "name": "BaseBdev2", 00:17:00.352 "uuid": "ece36ebe-f041-5478-8048-dcd16d18a19d", 00:17:00.352 "is_configured": true, 00:17:00.352 "data_offset": 2048, 00:17:00.352 "data_size": 63488 00:17:00.352 }, 00:17:00.352 { 00:17:00.352 "name": "BaseBdev3", 00:17:00.352 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:00.352 "is_configured": true, 00:17:00.352 "data_offset": 2048, 00:17:00.352 "data_size": 63488 00:17:00.352 }, 00:17:00.352 { 00:17:00.352 "name": "BaseBdev4", 00:17:00.352 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:00.352 "is_configured": true, 00:17:00.352 "data_offset": 2048, 00:17:00.352 "data_size": 63488 00:17:00.352 } 00:17:00.352 ] 00:17:00.352 }' 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.352 10:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.352 [2024-11-15 10:45:30.845754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:00.352 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:00.352 Zero copy mechanism will not be used. 00:17:00.352 Running I/O for 60 seconds... 
00:17:00.947 10:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.947 10:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.947 10:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.947 [2024-11-15 10:45:31.270093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.947 10:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.947 10:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:00.947 [2024-11-15 10:45:31.351538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:00.947 [2024-11-15 10:45:31.354176] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:00.947 [2024-11-15 10:45:31.473837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:00.947 [2024-11-15 10:45:31.475187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:01.205 [2024-11-15 10:45:31.697524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:01.205 [2024-11-15 10:45:31.698114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:01.719 126.00 IOPS, 378.00 MiB/s [2024-11-15T10:45:32.279Z] [2024-11-15 10:45:32.058249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:01.978 [2024-11-15 10:45:32.280422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.978 "name": "raid_bdev1", 00:17:01.978 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:01.978 "strip_size_kb": 0, 00:17:01.978 "state": "online", 00:17:01.978 "raid_level": "raid1", 00:17:01.978 "superblock": true, 00:17:01.978 "num_base_bdevs": 4, 00:17:01.978 "num_base_bdevs_discovered": 4, 00:17:01.978 "num_base_bdevs_operational": 4, 00:17:01.978 "process": { 00:17:01.978 "type": "rebuild", 00:17:01.978 "target": "spare", 00:17:01.978 "progress": { 00:17:01.978 "blocks": 10240, 00:17:01.978 "percent": 16 00:17:01.978 } 00:17:01.978 }, 00:17:01.978 "base_bdevs_list": [ 00:17:01.978 { 00:17:01.978 "name": "spare", 00:17:01.978 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:01.978 "is_configured": true, 00:17:01.978 "data_offset": 2048, 00:17:01.978 "data_size": 63488 
00:17:01.978 }, 00:17:01.978 { 00:17:01.978 "name": "BaseBdev2", 00:17:01.978 "uuid": "ece36ebe-f041-5478-8048-dcd16d18a19d", 00:17:01.978 "is_configured": true, 00:17:01.978 "data_offset": 2048, 00:17:01.978 "data_size": 63488 00:17:01.978 }, 00:17:01.978 { 00:17:01.978 "name": "BaseBdev3", 00:17:01.978 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:01.978 "is_configured": true, 00:17:01.978 "data_offset": 2048, 00:17:01.978 "data_size": 63488 00:17:01.978 }, 00:17:01.978 { 00:17:01.978 "name": "BaseBdev4", 00:17:01.978 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:01.978 "is_configured": true, 00:17:01.978 "data_offset": 2048, 00:17:01.978 "data_size": 63488 00:17:01.978 } 00:17:01.978 ] 00:17:01.978 }' 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.978 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.978 [2024-11-15 10:45:32.483827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.978 [2024-11-15 10:45:32.506641] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:01.978 [2024-11-15 10:45:32.526877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.978 [2024-11-15 10:45:32.526970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:01.978 [2024-11-15 10:45:32.526996] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:02.237 [2024-11-15 10:45:32.556815] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.237 "name": "raid_bdev1", 00:17:02.237 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:02.237 "strip_size_kb": 0, 00:17:02.237 "state": "online", 00:17:02.237 "raid_level": "raid1", 00:17:02.237 "superblock": true, 00:17:02.237 "num_base_bdevs": 4, 00:17:02.237 "num_base_bdevs_discovered": 3, 00:17:02.237 "num_base_bdevs_operational": 3, 00:17:02.237 "base_bdevs_list": [ 00:17:02.237 { 00:17:02.237 "name": null, 00:17:02.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.237 "is_configured": false, 00:17:02.237 "data_offset": 0, 00:17:02.237 "data_size": 63488 00:17:02.237 }, 00:17:02.237 { 00:17:02.237 "name": "BaseBdev2", 00:17:02.237 "uuid": "ece36ebe-f041-5478-8048-dcd16d18a19d", 00:17:02.237 "is_configured": true, 00:17:02.237 "data_offset": 2048, 00:17:02.237 "data_size": 63488 00:17:02.237 }, 00:17:02.237 { 00:17:02.237 "name": "BaseBdev3", 00:17:02.237 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:02.237 "is_configured": true, 00:17:02.237 "data_offset": 2048, 00:17:02.237 "data_size": 63488 00:17:02.237 }, 00:17:02.237 { 00:17:02.237 "name": "BaseBdev4", 00:17:02.237 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:02.237 "is_configured": true, 00:17:02.237 "data_offset": 2048, 00:17:02.237 "data_size": 63488 00:17:02.237 } 00:17:02.237 ] 00:17:02.237 }' 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.237 10:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 133.50 IOPS, 400.50 MiB/s [2024-11-15T10:45:33.312Z] 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.752 "name": "raid_bdev1", 00:17:02.752 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:02.752 "strip_size_kb": 0, 00:17:02.752 "state": "online", 00:17:02.752 "raid_level": "raid1", 00:17:02.752 "superblock": true, 00:17:02.752 "num_base_bdevs": 4, 00:17:02.752 "num_base_bdevs_discovered": 3, 00:17:02.752 "num_base_bdevs_operational": 3, 00:17:02.752 "base_bdevs_list": [ 00:17:02.752 { 00:17:02.752 "name": null, 00:17:02.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.752 "is_configured": false, 00:17:02.752 "data_offset": 0, 00:17:02.752 "data_size": 63488 00:17:02.752 }, 00:17:02.752 { 00:17:02.752 "name": "BaseBdev2", 00:17:02.752 "uuid": "ece36ebe-f041-5478-8048-dcd16d18a19d", 00:17:02.752 "is_configured": true, 00:17:02.752 "data_offset": 2048, 00:17:02.752 "data_size": 63488 00:17:02.752 }, 00:17:02.752 { 00:17:02.752 "name": "BaseBdev3", 00:17:02.752 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 
00:17:02.752 "is_configured": true, 00:17:02.752 "data_offset": 2048, 00:17:02.752 "data_size": 63488 00:17:02.752 }, 00:17:02.752 { 00:17:02.752 "name": "BaseBdev4", 00:17:02.752 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:02.752 "is_configured": true, 00:17:02.752 "data_offset": 2048, 00:17:02.752 "data_size": 63488 00:17:02.752 } 00:17:02.752 ] 00:17:02.752 }' 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.752 [2024-11-15 10:45:33.233437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.752 10:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:03.010 [2024-11-15 10:45:33.313289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:03.010 [2024-11-15 10:45:33.315799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.010 [2024-11-15 10:45:33.453715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:03.010 [2024-11-15 10:45:33.454227] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:03.268 [2024-11-15 10:45:33.676485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:03.268 [2024-11-15 10:45:33.676778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:03.527 137.00 IOPS, 411.00 MiB/s [2024-11-15T10:45:34.087Z] [2024-11-15 10:45:34.020707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:03.842 [2024-11-15 10:45:34.250988] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:03.842 [2024-11-15 10:45:34.251637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.842 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.842 "name": "raid_bdev1", 00:17:03.842 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:03.842 "strip_size_kb": 0, 00:17:03.842 "state": "online", 00:17:03.842 "raid_level": "raid1", 00:17:03.842 "superblock": true, 00:17:03.842 "num_base_bdevs": 4, 00:17:03.842 "num_base_bdevs_discovered": 4, 00:17:03.842 "num_base_bdevs_operational": 4, 00:17:03.842 "process": { 00:17:03.842 "type": "rebuild", 00:17:03.842 "target": "spare", 00:17:03.842 "progress": { 00:17:03.842 "blocks": 10240, 00:17:03.842 "percent": 16 00:17:03.842 } 00:17:03.842 }, 00:17:03.842 "base_bdevs_list": [ 00:17:03.842 { 00:17:03.842 "name": "spare", 00:17:03.842 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:03.842 "is_configured": true, 00:17:03.842 "data_offset": 2048, 00:17:03.842 "data_size": 63488 00:17:03.842 }, 00:17:03.842 { 00:17:03.843 "name": "BaseBdev2", 00:17:03.843 "uuid": "ece36ebe-f041-5478-8048-dcd16d18a19d", 00:17:03.843 "is_configured": true, 00:17:03.843 "data_offset": 2048, 00:17:03.843 "data_size": 63488 00:17:03.843 }, 00:17:03.843 { 00:17:03.843 "name": "BaseBdev3", 00:17:03.843 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:03.843 "is_configured": true, 00:17:03.843 "data_offset": 2048, 00:17:03.843 "data_size": 63488 00:17:03.843 }, 00:17:03.843 { 00:17:03.843 "name": "BaseBdev4", 00:17:03.843 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:03.843 "is_configured": true, 00:17:03.843 "data_offset": 2048, 00:17:03.843 "data_size": 63488 00:17:03.843 } 00:17:03.843 ] 00:17:03.843 }' 00:17:03.843 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.109 
10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:04.109 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.109 [2024-11-15 10:45:34.471338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.109 [2024-11-15 10:45:34.614741] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:04.109 [2024-11-15 10:45:34.614819] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.109 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.374 "name": "raid_bdev1", 00:17:04.374 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:04.374 "strip_size_kb": 0, 00:17:04.374 "state": "online", 00:17:04.374 "raid_level": "raid1", 00:17:04.374 "superblock": true, 00:17:04.374 "num_base_bdevs": 4, 00:17:04.374 "num_base_bdevs_discovered": 3, 00:17:04.374 "num_base_bdevs_operational": 3, 00:17:04.374 "process": { 00:17:04.374 "type": "rebuild", 00:17:04.374 "target": "spare", 00:17:04.374 "progress": { 00:17:04.374 "blocks": 12288, 00:17:04.374 "percent": 19 00:17:04.374 } 00:17:04.374 }, 00:17:04.374 "base_bdevs_list": [ 00:17:04.374 { 00:17:04.374 "name": "spare", 00:17:04.374 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:04.374 "is_configured": true, 00:17:04.374 "data_offset": 2048, 00:17:04.374 "data_size": 63488 00:17:04.374 }, 00:17:04.374 { 
00:17:04.374 "name": null, 00:17:04.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.374 "is_configured": false, 00:17:04.374 "data_offset": 0, 00:17:04.374 "data_size": 63488 00:17:04.374 }, 00:17:04.374 { 00:17:04.374 "name": "BaseBdev3", 00:17:04.374 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:04.374 "is_configured": true, 00:17:04.374 "data_offset": 2048, 00:17:04.374 "data_size": 63488 00:17:04.374 }, 00:17:04.374 { 00:17:04.374 "name": "BaseBdev4", 00:17:04.374 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:04.374 "is_configured": true, 00:17:04.374 "data_offset": 2048, 00:17:04.374 "data_size": 63488 00:17:04.374 } 00:17:04.374 ] 00:17:04.374 }' 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.374 [2024-11-15 10:45:34.770389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=528 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.374 "name": "raid_bdev1", 00:17:04.374 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:04.374 "strip_size_kb": 0, 00:17:04.374 "state": "online", 00:17:04.374 "raid_level": "raid1", 00:17:04.374 "superblock": true, 00:17:04.374 "num_base_bdevs": 4, 00:17:04.374 "num_base_bdevs_discovered": 3, 00:17:04.374 "num_base_bdevs_operational": 3, 00:17:04.374 "process": { 00:17:04.374 "type": "rebuild", 00:17:04.374 "target": "spare", 00:17:04.374 "progress": { 00:17:04.374 "blocks": 14336, 00:17:04.374 "percent": 22 00:17:04.374 } 00:17:04.374 }, 00:17:04.374 "base_bdevs_list": [ 00:17:04.374 { 00:17:04.374 "name": "spare", 00:17:04.374 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:04.374 "is_configured": true, 00:17:04.374 "data_offset": 2048, 00:17:04.374 "data_size": 63488 00:17:04.374 }, 00:17:04.374 { 00:17:04.374 "name": null, 00:17:04.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.374 "is_configured": false, 00:17:04.374 "data_offset": 0, 00:17:04.374 "data_size": 63488 00:17:04.374 }, 00:17:04.374 { 00:17:04.374 "name": "BaseBdev3", 00:17:04.374 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:04.374 "is_configured": true, 00:17:04.374 "data_offset": 
2048, 00:17:04.374 "data_size": 63488 00:17:04.374 }, 00:17:04.374 { 00:17:04.374 "name": "BaseBdev4", 00:17:04.374 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:04.374 "is_configured": true, 00:17:04.374 "data_offset": 2048, 00:17:04.374 "data_size": 63488 00:17:04.374 } 00:17:04.374 ] 00:17:04.374 }' 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.374 116.50 IOPS, 349.50 MiB/s [2024-11-15T10:45:34.934Z] 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.374 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.633 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.633 10:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.633 [2024-11-15 10:45:34.989927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:04.633 [2024-11-15 10:45:34.990220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:04.892 [2024-11-15 10:45:35.330840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:05.459 [2024-11-15 10:45:35.787882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:05.459 102.00 IOPS, 306.00 MiB/s [2024-11-15T10:45:36.019Z] 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.459 "name": "raid_bdev1", 00:17:05.459 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:05.459 "strip_size_kb": 0, 00:17:05.459 "state": "online", 00:17:05.459 "raid_level": "raid1", 00:17:05.459 "superblock": true, 00:17:05.459 "num_base_bdevs": 4, 00:17:05.459 "num_base_bdevs_discovered": 3, 00:17:05.459 "num_base_bdevs_operational": 3, 00:17:05.459 "process": { 00:17:05.459 "type": "rebuild", 00:17:05.459 "target": "spare", 00:17:05.459 "progress": { 00:17:05.459 "blocks": 26624, 00:17:05.459 "percent": 41 00:17:05.459 } 00:17:05.459 }, 00:17:05.459 "base_bdevs_list": [ 00:17:05.459 { 00:17:05.459 "name": "spare", 00:17:05.459 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:05.459 "is_configured": true, 00:17:05.459 "data_offset": 2048, 00:17:05.459 "data_size": 63488 00:17:05.459 }, 00:17:05.459 { 00:17:05.459 "name": null, 00:17:05.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.459 "is_configured": false, 00:17:05.459 "data_offset": 0, 00:17:05.459 
"data_size": 63488 00:17:05.459 }, 00:17:05.459 { 00:17:05.459 "name": "BaseBdev3", 00:17:05.459 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:05.459 "is_configured": true, 00:17:05.459 "data_offset": 2048, 00:17:05.459 "data_size": 63488 00:17:05.459 }, 00:17:05.459 { 00:17:05.459 "name": "BaseBdev4", 00:17:05.459 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:05.459 "is_configured": true, 00:17:05.459 "data_offset": 2048, 00:17:05.459 "data_size": 63488 00:17:05.459 } 00:17:05.459 ] 00:17:05.459 }' 00:17:05.459 10:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.718 [2024-11-15 10:45:36.024994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:05.718 [2024-11-15 10:45:36.025261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:05.719 10:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.719 10:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.719 10:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.719 10:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.286 [2024-11-15 10:45:36.630553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:06.545 92.50 IOPS, 277.50 MiB/s [2024-11-15T10:45:37.105Z] [2024-11-15 10:45:36.977631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:06.545 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.545 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.545 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.545 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.545 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.545 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.804 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.804 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.804 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.804 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.804 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.804 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.804 "name": "raid_bdev1", 00:17:06.804 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:06.804 "strip_size_kb": 0, 00:17:06.804 "state": "online", 00:17:06.804 "raid_level": "raid1", 00:17:06.804 "superblock": true, 00:17:06.804 "num_base_bdevs": 4, 00:17:06.804 "num_base_bdevs_discovered": 3, 00:17:06.804 "num_base_bdevs_operational": 3, 00:17:06.804 "process": { 00:17:06.804 "type": "rebuild", 00:17:06.804 "target": "spare", 00:17:06.804 "progress": { 00:17:06.804 "blocks": 45056, 00:17:06.804 "percent": 70 00:17:06.804 } 00:17:06.804 }, 00:17:06.804 "base_bdevs_list": [ 00:17:06.804 { 00:17:06.804 "name": "spare", 00:17:06.804 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:06.804 "is_configured": true, 00:17:06.804 "data_offset": 2048, 00:17:06.804 "data_size": 63488 00:17:06.804 }, 00:17:06.804 { 
00:17:06.804 "name": null, 00:17:06.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.804 "is_configured": false, 00:17:06.804 "data_offset": 0, 00:17:06.804 "data_size": 63488 00:17:06.804 }, 00:17:06.804 { 00:17:06.804 "name": "BaseBdev3", 00:17:06.804 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:06.804 "is_configured": true, 00:17:06.804 "data_offset": 2048, 00:17:06.804 "data_size": 63488 00:17:06.804 }, 00:17:06.804 { 00:17:06.804 "name": "BaseBdev4", 00:17:06.804 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:06.804 "is_configured": true, 00:17:06.805 "data_offset": 2048, 00:17:06.805 "data_size": 63488 00:17:06.805 } 00:17:06.805 ] 00:17:06.805 }' 00:17:06.805 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.805 [2024-11-15 10:45:37.213739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:06.805 [2024-11-15 10:45:37.214015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:06.805 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.805 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.805 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.805 10:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.371 [2024-11-15 10:45:37.688822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:07.939 83.71 IOPS, 251.14 MiB/s [2024-11-15T10:45:38.499Z] [2024-11-15 10:45:38.256179] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.939 "name": "raid_bdev1", 00:17:07.939 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:07.939 "strip_size_kb": 0, 00:17:07.939 "state": "online", 00:17:07.939 "raid_level": "raid1", 00:17:07.939 "superblock": true, 00:17:07.939 "num_base_bdevs": 4, 00:17:07.939 "num_base_bdevs_discovered": 3, 00:17:07.939 "num_base_bdevs_operational": 3, 00:17:07.939 "process": { 00:17:07.939 "type": "rebuild", 00:17:07.939 "target": "spare", 00:17:07.939 "progress": { 00:17:07.939 "blocks": 63488, 00:17:07.939 "percent": 100 00:17:07.939 } 00:17:07.939 }, 00:17:07.939 "base_bdevs_list": [ 00:17:07.939 { 00:17:07.939 "name": "spare", 00:17:07.939 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 
00:17:07.939 "is_configured": true, 00:17:07.939 "data_offset": 2048, 00:17:07.939 "data_size": 63488 00:17:07.939 }, 00:17:07.939 { 00:17:07.939 "name": null, 00:17:07.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.939 "is_configured": false, 00:17:07.939 "data_offset": 0, 00:17:07.939 "data_size": 63488 00:17:07.939 }, 00:17:07.939 { 00:17:07.939 "name": "BaseBdev3", 00:17:07.939 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:07.939 "is_configured": true, 00:17:07.939 "data_offset": 2048, 00:17:07.939 "data_size": 63488 00:17:07.939 }, 00:17:07.939 { 00:17:07.939 "name": "BaseBdev4", 00:17:07.939 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:07.939 "is_configured": true, 00:17:07.939 "data_offset": 2048, 00:17:07.939 "data_size": 63488 00:17:07.939 } 00:17:07.939 ] 00:17:07.939 }' 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.939 [2024-11-15 10:45:38.356162] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:07.939 [2024-11-15 10:45:38.366842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.939 10:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.072 79.00 IOPS, 237.00 MiB/s [2024-11-15T10:45:39.632Z] 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.072 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.072 "name": "raid_bdev1", 00:17:09.072 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:09.072 "strip_size_kb": 0, 00:17:09.072 "state": "online", 00:17:09.072 "raid_level": "raid1", 00:17:09.072 "superblock": true, 00:17:09.072 "num_base_bdevs": 4, 00:17:09.072 "num_base_bdevs_discovered": 3, 00:17:09.072 "num_base_bdevs_operational": 3, 00:17:09.072 "base_bdevs_list": [ 00:17:09.072 { 00:17:09.072 "name": "spare", 00:17:09.072 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:09.072 "is_configured": true, 00:17:09.072 "data_offset": 2048, 00:17:09.072 "data_size": 63488 00:17:09.072 }, 00:17:09.072 { 00:17:09.072 "name": null, 00:17:09.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.072 "is_configured": false, 00:17:09.072 "data_offset": 0, 00:17:09.072 "data_size": 63488 00:17:09.072 }, 00:17:09.072 { 00:17:09.072 "name": "BaseBdev3", 00:17:09.072 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 
00:17:09.072 "is_configured": true, 00:17:09.072 "data_offset": 2048, 00:17:09.072 "data_size": 63488 00:17:09.072 }, 00:17:09.072 { 00:17:09.072 "name": "BaseBdev4", 00:17:09.073 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:09.073 "is_configured": true, 00:17:09.073 "data_offset": 2048, 00:17:09.073 "data_size": 63488 00:17:09.073 } 00:17:09.073 ] 00:17:09.073 }' 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.073 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.073 10:45:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.331 "name": "raid_bdev1", 00:17:09.331 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:09.331 "strip_size_kb": 0, 00:17:09.331 "state": "online", 00:17:09.331 "raid_level": "raid1", 00:17:09.331 "superblock": true, 00:17:09.331 "num_base_bdevs": 4, 00:17:09.331 "num_base_bdevs_discovered": 3, 00:17:09.331 "num_base_bdevs_operational": 3, 00:17:09.331 "base_bdevs_list": [ 00:17:09.331 { 00:17:09.331 "name": "spare", 00:17:09.331 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:09.331 "is_configured": true, 00:17:09.331 "data_offset": 2048, 00:17:09.331 "data_size": 63488 00:17:09.331 }, 00:17:09.331 { 00:17:09.331 "name": null, 00:17:09.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.331 "is_configured": false, 00:17:09.331 "data_offset": 0, 00:17:09.331 "data_size": 63488 00:17:09.331 }, 00:17:09.331 { 00:17:09.331 "name": "BaseBdev3", 00:17:09.331 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:09.331 "is_configured": true, 00:17:09.331 "data_offset": 2048, 00:17:09.331 "data_size": 63488 00:17:09.331 }, 00:17:09.331 { 00:17:09.331 "name": "BaseBdev4", 00:17:09.331 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:09.331 "is_configured": true, 00:17:09.331 "data_offset": 2048, 00:17:09.331 "data_size": 63488 00:17:09.331 } 00:17:09.331 ] 00:17:09.331 }' 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.331 10:45:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.331 "name": "raid_bdev1", 00:17:09.331 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:09.331 "strip_size_kb": 0, 00:17:09.331 "state": "online", 00:17:09.331 "raid_level": "raid1", 00:17:09.331 
"superblock": true, 00:17:09.331 "num_base_bdevs": 4, 00:17:09.331 "num_base_bdevs_discovered": 3, 00:17:09.331 "num_base_bdevs_operational": 3, 00:17:09.331 "base_bdevs_list": [ 00:17:09.331 { 00:17:09.331 "name": "spare", 00:17:09.331 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:09.331 "is_configured": true, 00:17:09.331 "data_offset": 2048, 00:17:09.331 "data_size": 63488 00:17:09.331 }, 00:17:09.331 { 00:17:09.331 "name": null, 00:17:09.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.331 "is_configured": false, 00:17:09.331 "data_offset": 0, 00:17:09.331 "data_size": 63488 00:17:09.331 }, 00:17:09.331 { 00:17:09.331 "name": "BaseBdev3", 00:17:09.331 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:09.331 "is_configured": true, 00:17:09.331 "data_offset": 2048, 00:17:09.331 "data_size": 63488 00:17:09.331 }, 00:17:09.331 { 00:17:09.331 "name": "BaseBdev4", 00:17:09.331 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:09.331 "is_configured": true, 00:17:09.331 "data_offset": 2048, 00:17:09.331 "data_size": 63488 00:17:09.331 } 00:17:09.331 ] 00:17:09.331 }' 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.331 10:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.898 74.78 IOPS, 224.33 MiB/s [2024-11-15T10:45:40.458Z] 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.898 [2024-11-15 10:45:40.267765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.898 [2024-11-15 10:45:40.267928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.898 00:17:09.898 Latency(us) 00:17:09.898 
[2024-11-15T10:45:40.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.898 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:09.898 raid_bdev1 : 9.51 72.23 216.69 0.00 0.00 19328.71 288.58 112960.23 00:17:09.898 [2024-11-15T10:45:40.458Z] =================================================================================================================== 00:17:09.898 [2024-11-15T10:45:40.458Z] Total : 72.23 216.69 0.00 0.00 19328.71 288.58 112960.23 00:17:09.898 [2024-11-15 10:45:40.379596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.898 [2024-11-15 10:45:40.379818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.898 [2024-11-15 10:45:40.379996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.898 [2024-11-15 10:45:40.380141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:09.898 { 00:17:09.898 "results": [ 00:17:09.898 { 00:17:09.898 "job": "raid_bdev1", 00:17:09.898 "core_mask": "0x1", 00:17:09.898 "workload": "randrw", 00:17:09.898 "percentage": 50, 00:17:09.898 "status": "finished", 00:17:09.898 "queue_depth": 2, 00:17:09.898 "io_size": 3145728, 00:17:09.898 "runtime": 9.511274, 00:17:09.898 "iops": 72.23007138686152, 00:17:09.898 "mibps": 216.69021416058456, 00:17:09.898 "io_failed": 0, 00:17:09.898 "io_timeout": 0, 00:17:09.898 "avg_latency_us": 19328.714780997747, 00:17:09.898 "min_latency_us": 288.58181818181816, 00:17:09.898 "max_latency_us": 112960.23272727273 00:17:09.898 } 00:17:09.898 ], 00:17:09.898 "core_count": 1 00:17:09.898 } 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.898 
10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:09.898 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:10.464 /dev/nbd0 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:10.464 
10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.464 1+0 records in 00:17:10.464 1+0 records out 00:17:10.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383434 s, 10.7 MB/s 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.464 10:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:10.722 /dev/nbd1 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.722 1+0 records in 00:17:10.722 1+0 records out 00:17:10.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052758 s, 7.8 MB/s 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:17:10.722 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.723 10:45:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.723 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:10.981 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:10.981 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.981 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:10.981 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:10.981 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:10.981 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:10.981 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:11.241 
10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.241 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:11.499 /dev/nbd1 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:11.499 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:11.500 10:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.500 1+0 records in 00:17:11.500 1+0 records out 00:17:11.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314793 s, 13.0 MB/s 00:17:11.500 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.500 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:17:11.500 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.500 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:11.500 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:17:11.500 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.500 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.500 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:11.758 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:11.758 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.758 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:17:11.758 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:11.758 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:11.758 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.758 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:12.017 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.017 10:45:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.277 [2024-11-15 10:45:42.740633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:12.277 
[2024-11-15 10:45:42.740696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.277 [2024-11-15 10:45:42.740738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:12.277 [2024-11-15 10:45:42.740754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.277 [2024-11-15 10:45:42.743474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.277 [2024-11-15 10:45:42.743521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:12.277 [2024-11-15 10:45:42.743633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:12.277 [2024-11-15 10:45:42.743694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.277 [2024-11-15 10:45:42.743871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.277 [2024-11-15 10:45:42.744009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:12.277 spare 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.277 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.536 [2024-11-15 10:45:42.844145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:12.536 [2024-11-15 10:45:42.844202] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:12.536 [2024-11-15 10:45:42.844676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:17:12.536 [2024-11-15 10:45:42.844924] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:12.536 [2024-11-15 10:45:42.844948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:12.536 [2024-11-15 10:45:42.845200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.536 10:45:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.536 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.536 "name": "raid_bdev1", 00:17:12.536 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:12.536 "strip_size_kb": 0, 00:17:12.536 "state": "online", 00:17:12.536 "raid_level": "raid1", 00:17:12.536 "superblock": true, 00:17:12.536 "num_base_bdevs": 4, 00:17:12.536 "num_base_bdevs_discovered": 3, 00:17:12.536 "num_base_bdevs_operational": 3, 00:17:12.536 "base_bdevs_list": [ 00:17:12.536 { 00:17:12.536 "name": "spare", 00:17:12.536 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:12.536 "is_configured": true, 00:17:12.536 "data_offset": 2048, 00:17:12.536 "data_size": 63488 00:17:12.536 }, 00:17:12.536 { 00:17:12.536 "name": null, 00:17:12.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.537 "is_configured": false, 00:17:12.537 "data_offset": 2048, 00:17:12.537 "data_size": 63488 00:17:12.537 }, 00:17:12.537 { 00:17:12.537 "name": "BaseBdev3", 00:17:12.537 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:12.537 "is_configured": true, 00:17:12.537 "data_offset": 2048, 00:17:12.537 "data_size": 63488 00:17:12.537 }, 00:17:12.537 { 00:17:12.537 "name": "BaseBdev4", 00:17:12.537 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:12.537 "is_configured": true, 00:17:12.537 "data_offset": 2048, 00:17:12.537 "data_size": 63488 00:17:12.537 } 00:17:12.537 ] 00:17:12.537 }' 00:17:12.537 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.537 10:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.796 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.796 10:45:43 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.796 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.796 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.796 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.054 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.054 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.054 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.054 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.054 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.054 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.054 "name": "raid_bdev1", 00:17:13.054 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:13.054 "strip_size_kb": 0, 00:17:13.055 "state": "online", 00:17:13.055 "raid_level": "raid1", 00:17:13.055 "superblock": true, 00:17:13.055 "num_base_bdevs": 4, 00:17:13.055 "num_base_bdevs_discovered": 3, 00:17:13.055 "num_base_bdevs_operational": 3, 00:17:13.055 "base_bdevs_list": [ 00:17:13.055 { 00:17:13.055 "name": "spare", 00:17:13.055 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:13.055 "is_configured": true, 00:17:13.055 "data_offset": 2048, 00:17:13.055 "data_size": 63488 00:17:13.055 }, 00:17:13.055 { 00:17:13.055 "name": null, 00:17:13.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.055 "is_configured": false, 00:17:13.055 "data_offset": 2048, 00:17:13.055 "data_size": 63488 00:17:13.055 }, 00:17:13.055 { 00:17:13.055 "name": "BaseBdev3", 00:17:13.055 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 
00:17:13.055 "is_configured": true, 00:17:13.055 "data_offset": 2048, 00:17:13.055 "data_size": 63488 00:17:13.055 }, 00:17:13.055 { 00:17:13.055 "name": "BaseBdev4", 00:17:13.055 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:13.055 "is_configured": true, 00:17:13.055 "data_offset": 2048, 00:17:13.055 "data_size": 63488 00:17:13.055 } 00:17:13.055 ] 00:17:13.055 }' 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.055 [2024-11-15 10:45:43.581464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.055 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.315 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.315 "name": "raid_bdev1", 00:17:13.315 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:13.315 "strip_size_kb": 0, 00:17:13.315 "state": 
"online", 00:17:13.315 "raid_level": "raid1", 00:17:13.315 "superblock": true, 00:17:13.315 "num_base_bdevs": 4, 00:17:13.315 "num_base_bdevs_discovered": 2, 00:17:13.315 "num_base_bdevs_operational": 2, 00:17:13.315 "base_bdevs_list": [ 00:17:13.315 { 00:17:13.315 "name": null, 00:17:13.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.315 "is_configured": false, 00:17:13.315 "data_offset": 0, 00:17:13.315 "data_size": 63488 00:17:13.315 }, 00:17:13.315 { 00:17:13.315 "name": null, 00:17:13.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.315 "is_configured": false, 00:17:13.315 "data_offset": 2048, 00:17:13.315 "data_size": 63488 00:17:13.315 }, 00:17:13.315 { 00:17:13.315 "name": "BaseBdev3", 00:17:13.315 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:13.315 "is_configured": true, 00:17:13.315 "data_offset": 2048, 00:17:13.315 "data_size": 63488 00:17:13.315 }, 00:17:13.315 { 00:17:13.315 "name": "BaseBdev4", 00:17:13.315 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:13.315 "is_configured": true, 00:17:13.315 "data_offset": 2048, 00:17:13.315 "data_size": 63488 00:17:13.315 } 00:17:13.315 ] 00:17:13.315 }' 00:17:13.315 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.315 10:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.583 10:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.583 10:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.583 10:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.583 [2024-11-15 10:45:44.089713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.583 [2024-11-15 10:45:44.089964] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:17:13.583 [2024-11-15 10:45:44.089987] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:13.583 [2024-11-15 10:45:44.090038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.583 [2024-11-15 10:45:44.102927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:17:13.583 10:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.583 10:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:13.583 [2024-11-15 10:45:44.105243] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.079 
"name": "raid_bdev1", 00:17:15.079 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:15.079 "strip_size_kb": 0, 00:17:15.079 "state": "online", 00:17:15.079 "raid_level": "raid1", 00:17:15.079 "superblock": true, 00:17:15.079 "num_base_bdevs": 4, 00:17:15.079 "num_base_bdevs_discovered": 3, 00:17:15.079 "num_base_bdevs_operational": 3, 00:17:15.079 "process": { 00:17:15.079 "type": "rebuild", 00:17:15.079 "target": "spare", 00:17:15.079 "progress": { 00:17:15.079 "blocks": 20480, 00:17:15.079 "percent": 32 00:17:15.079 } 00:17:15.079 }, 00:17:15.079 "base_bdevs_list": [ 00:17:15.079 { 00:17:15.079 "name": "spare", 00:17:15.079 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:15.079 "is_configured": true, 00:17:15.079 "data_offset": 2048, 00:17:15.079 "data_size": 63488 00:17:15.079 }, 00:17:15.079 { 00:17:15.079 "name": null, 00:17:15.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.079 "is_configured": false, 00:17:15.079 "data_offset": 2048, 00:17:15.079 "data_size": 63488 00:17:15.079 }, 00:17:15.079 { 00:17:15.079 "name": "BaseBdev3", 00:17:15.079 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:15.079 "is_configured": true, 00:17:15.079 "data_offset": 2048, 00:17:15.079 "data_size": 63488 00:17:15.079 }, 00:17:15.079 { 00:17:15.079 "name": "BaseBdev4", 00:17:15.079 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:15.079 "is_configured": true, 00:17:15.079 "data_offset": 2048, 00:17:15.079 "data_size": 63488 00:17:15.079 } 00:17:15.079 ] 00:17:15.079 }' 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.079 
10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.079 [2024-11-15 10:45:45.283225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.079 [2024-11-15 10:45:45.312072] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.079 [2024-11-15 10:45:45.312150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.079 [2024-11-15 10:45:45.312181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.079 [2024-11-15 10:45:45.312192] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.079 10:45:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.079 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.079 "name": "raid_bdev1", 00:17:15.079 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:15.079 "strip_size_kb": 0, 00:17:15.079 "state": "online", 00:17:15.079 "raid_level": "raid1", 00:17:15.079 "superblock": true, 00:17:15.079 "num_base_bdevs": 4, 00:17:15.079 "num_base_bdevs_discovered": 2, 00:17:15.079 "num_base_bdevs_operational": 2, 00:17:15.079 "base_bdevs_list": [ 00:17:15.079 { 00:17:15.079 "name": null, 00:17:15.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.079 "is_configured": false, 00:17:15.079 "data_offset": 0, 00:17:15.079 "data_size": 63488 00:17:15.079 }, 00:17:15.079 { 00:17:15.079 "name": null, 00:17:15.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.079 "is_configured": false, 00:17:15.079 "data_offset": 2048, 00:17:15.079 "data_size": 63488 00:17:15.079 }, 00:17:15.079 { 00:17:15.079 "name": "BaseBdev3", 00:17:15.079 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:15.079 "is_configured": true, 00:17:15.079 "data_offset": 2048, 00:17:15.080 "data_size": 63488 00:17:15.080 }, 00:17:15.080 { 00:17:15.080 "name": "BaseBdev4", 00:17:15.080 "uuid": 
"9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:15.080 "is_configured": true, 00:17:15.080 "data_offset": 2048, 00:17:15.080 "data_size": 63488 00:17:15.080 } 00:17:15.080 ] 00:17:15.080 }' 00:17:15.080 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.080 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.338 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:15.338 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.338 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.338 [2024-11-15 10:45:45.821618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:15.338 [2024-11-15 10:45:45.821703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.338 [2024-11-15 10:45:45.821746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:15.338 [2024-11-15 10:45:45.821762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.338 [2024-11-15 10:45:45.822339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.338 [2024-11-15 10:45:45.822395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:15.338 [2024-11-15 10:45:45.822518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:15.338 [2024-11-15 10:45:45.822538] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:15.338 [2024-11-15 10:45:45.822557] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:15.338 [2024-11-15 10:45:45.822602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.338 [2024-11-15 10:45:45.835530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:17:15.338 spare 00:17:15.338 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.338 10:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:15.338 [2024-11-15 10:45:45.837824] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.716 "name": "raid_bdev1", 00:17:16.716 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:16.716 "strip_size_kb": 0, 00:17:16.716 
"state": "online", 00:17:16.716 "raid_level": "raid1", 00:17:16.716 "superblock": true, 00:17:16.716 "num_base_bdevs": 4, 00:17:16.716 "num_base_bdevs_discovered": 3, 00:17:16.716 "num_base_bdevs_operational": 3, 00:17:16.716 "process": { 00:17:16.716 "type": "rebuild", 00:17:16.716 "target": "spare", 00:17:16.716 "progress": { 00:17:16.716 "blocks": 20480, 00:17:16.716 "percent": 32 00:17:16.716 } 00:17:16.716 }, 00:17:16.716 "base_bdevs_list": [ 00:17:16.716 { 00:17:16.716 "name": "spare", 00:17:16.716 "uuid": "9e481641-6f3c-5c09-b3e1-a02c3b133540", 00:17:16.716 "is_configured": true, 00:17:16.716 "data_offset": 2048, 00:17:16.716 "data_size": 63488 00:17:16.716 }, 00:17:16.716 { 00:17:16.716 "name": null, 00:17:16.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.716 "is_configured": false, 00:17:16.716 "data_offset": 2048, 00:17:16.716 "data_size": 63488 00:17:16.716 }, 00:17:16.716 { 00:17:16.716 "name": "BaseBdev3", 00:17:16.716 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:16.716 "is_configured": true, 00:17:16.716 "data_offset": 2048, 00:17:16.716 "data_size": 63488 00:17:16.716 }, 00:17:16.716 { 00:17:16.716 "name": "BaseBdev4", 00:17:16.716 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:16.716 "is_configured": true, 00:17:16.716 "data_offset": 2048, 00:17:16.716 "data_size": 63488 00:17:16.716 } 00:17:16.716 ] 00:17:16.716 }' 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.716 10:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:16.716 10:45:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.716 [2024-11-15 10:45:47.011514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.716 [2024-11-15 10:45:47.044293] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:16.716 [2024-11-15 10:45:47.044393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.716 [2024-11-15 10:45:47.044421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.716 [2024-11-15 10:45:47.044435] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.716 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.717 10:45:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.717 "name": "raid_bdev1", 00:17:16.717 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:16.717 "strip_size_kb": 0, 00:17:16.717 "state": "online", 00:17:16.717 "raid_level": "raid1", 00:17:16.717 "superblock": true, 00:17:16.717 "num_base_bdevs": 4, 00:17:16.717 "num_base_bdevs_discovered": 2, 00:17:16.717 "num_base_bdevs_operational": 2, 00:17:16.717 "base_bdevs_list": [ 00:17:16.717 { 00:17:16.717 "name": null, 00:17:16.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.717 "is_configured": false, 00:17:16.717 "data_offset": 0, 00:17:16.717 "data_size": 63488 00:17:16.717 }, 00:17:16.717 { 00:17:16.717 "name": null, 00:17:16.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.717 "is_configured": false, 00:17:16.717 "data_offset": 2048, 00:17:16.717 "data_size": 63488 00:17:16.717 }, 00:17:16.717 { 00:17:16.717 "name": "BaseBdev3", 00:17:16.717 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:16.717 "is_configured": true, 00:17:16.717 "data_offset": 2048, 00:17:16.717 "data_size": 63488 00:17:16.717 }, 00:17:16.717 { 00:17:16.717 "name": "BaseBdev4", 00:17:16.717 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:16.717 "is_configured": true, 00:17:16.717 "data_offset": 2048, 00:17:16.717 
"data_size": 63488 00:17:16.717 } 00:17:16.717 ] 00:17:16.717 }' 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.717 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.284 "name": "raid_bdev1", 00:17:17.284 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:17.284 "strip_size_kb": 0, 00:17:17.284 "state": "online", 00:17:17.284 "raid_level": "raid1", 00:17:17.284 "superblock": true, 00:17:17.284 "num_base_bdevs": 4, 00:17:17.284 "num_base_bdevs_discovered": 2, 00:17:17.284 "num_base_bdevs_operational": 2, 00:17:17.284 "base_bdevs_list": [ 00:17:17.284 { 00:17:17.284 "name": null, 00:17:17.284 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:17.284 "is_configured": false, 00:17:17.284 "data_offset": 0, 00:17:17.284 "data_size": 63488 00:17:17.284 }, 00:17:17.284 { 00:17:17.284 "name": null, 00:17:17.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.284 "is_configured": false, 00:17:17.284 "data_offset": 2048, 00:17:17.284 "data_size": 63488 00:17:17.284 }, 00:17:17.284 { 00:17:17.284 "name": "BaseBdev3", 00:17:17.284 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:17.284 "is_configured": true, 00:17:17.284 "data_offset": 2048, 00:17:17.284 "data_size": 63488 00:17:17.284 }, 00:17:17.284 { 00:17:17.284 "name": "BaseBdev4", 00:17:17.284 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:17.284 "is_configured": true, 00:17:17.284 "data_offset": 2048, 00:17:17.284 "data_size": 63488 00:17:17.284 } 00:17:17.284 ] 00:17:17.284 }' 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.284 10:45:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 [2024-11-15 10:45:47.741780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:17.284 [2024-11-15 10:45:47.741996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.284 [2024-11-15 10:45:47.742036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:17:17.284 [2024-11-15 10:45:47.742054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.284 [2024-11-15 10:45:47.742653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.284 [2024-11-15 10:45:47.742701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:17.284 [2024-11-15 10:45:47.742802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:17.284 [2024-11-15 10:45:47.742828] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:17.284 [2024-11-15 10:45:47.742843] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:17.284 [2024-11-15 10:45:47.742858] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:17.284 BaseBdev1 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.284 10:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.221 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.480 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.480 "name": "raid_bdev1", 00:17:18.480 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:18.480 "strip_size_kb": 0, 00:17:18.480 "state": "online", 00:17:18.480 "raid_level": "raid1", 00:17:18.480 "superblock": true, 00:17:18.480 "num_base_bdevs": 4, 00:17:18.480 "num_base_bdevs_discovered": 2, 00:17:18.480 "num_base_bdevs_operational": 2, 00:17:18.480 "base_bdevs_list": [ 00:17:18.480 { 00:17:18.480 "name": null, 00:17:18.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.480 "is_configured": false, 00:17:18.480 
"data_offset": 0, 00:17:18.480 "data_size": 63488 00:17:18.480 }, 00:17:18.480 { 00:17:18.480 "name": null, 00:17:18.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.480 "is_configured": false, 00:17:18.480 "data_offset": 2048, 00:17:18.480 "data_size": 63488 00:17:18.480 }, 00:17:18.480 { 00:17:18.480 "name": "BaseBdev3", 00:17:18.480 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:18.480 "is_configured": true, 00:17:18.480 "data_offset": 2048, 00:17:18.480 "data_size": 63488 00:17:18.480 }, 00:17:18.480 { 00:17:18.480 "name": "BaseBdev4", 00:17:18.480 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:18.480 "is_configured": true, 00:17:18.480 "data_offset": 2048, 00:17:18.480 "data_size": 63488 00:17:18.480 } 00:17:18.480 ] 00:17:18.480 }' 00:17:18.480 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.480 10:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:17:18.740 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.999 "name": "raid_bdev1", 00:17:18.999 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:18.999 "strip_size_kb": 0, 00:17:18.999 "state": "online", 00:17:18.999 "raid_level": "raid1", 00:17:18.999 "superblock": true, 00:17:18.999 "num_base_bdevs": 4, 00:17:18.999 "num_base_bdevs_discovered": 2, 00:17:18.999 "num_base_bdevs_operational": 2, 00:17:18.999 "base_bdevs_list": [ 00:17:18.999 { 00:17:18.999 "name": null, 00:17:18.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.999 "is_configured": false, 00:17:18.999 "data_offset": 0, 00:17:18.999 "data_size": 63488 00:17:18.999 }, 00:17:18.999 { 00:17:18.999 "name": null, 00:17:18.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.999 "is_configured": false, 00:17:18.999 "data_offset": 2048, 00:17:18.999 "data_size": 63488 00:17:18.999 }, 00:17:18.999 { 00:17:18.999 "name": "BaseBdev3", 00:17:18.999 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:18.999 "is_configured": true, 00:17:18.999 "data_offset": 2048, 00:17:18.999 "data_size": 63488 00:17:18.999 }, 00:17:18.999 { 00:17:18.999 "name": "BaseBdev4", 00:17:18.999 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:18.999 "is_configured": true, 00:17:18.999 "data_offset": 2048, 00:17:18.999 "data_size": 63488 00:17:18.999 } 00:17:18.999 ] 00:17:18.999 }' 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.999 
10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.999 [2024-11-15 10:45:49.418491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.999 [2024-11-15 10:45:49.418689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:18.999 [2024-11-15 10:45:49.418710] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:18.999 request: 00:17:18.999 { 00:17:18.999 "base_bdev": "BaseBdev1", 00:17:18.999 "raid_bdev": "raid_bdev1", 00:17:18.999 "method": "bdev_raid_add_base_bdev", 00:17:18.999 "req_id": 1 00:17:18.999 } 00:17:18.999 Got JSON-RPC error response 00:17:18.999 response: 00:17:18.999 { 00:17:18.999 "code": -22, 00:17:18.999 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:18.999 } 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.999 10:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.934 10:45:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.934 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.934 "name": "raid_bdev1", 00:17:19.934 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:19.934 "strip_size_kb": 0, 00:17:19.934 "state": "online", 00:17:19.934 "raid_level": "raid1", 00:17:19.935 "superblock": true, 00:17:19.935 "num_base_bdevs": 4, 00:17:19.935 "num_base_bdevs_discovered": 2, 00:17:19.935 "num_base_bdevs_operational": 2, 00:17:19.935 "base_bdevs_list": [ 00:17:19.935 { 00:17:19.935 "name": null, 00:17:19.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.935 "is_configured": false, 00:17:19.935 "data_offset": 0, 00:17:19.935 "data_size": 63488 00:17:19.935 }, 00:17:19.935 { 00:17:19.935 "name": null, 00:17:19.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.935 "is_configured": false, 00:17:19.935 "data_offset": 2048, 00:17:19.935 "data_size": 63488 00:17:19.935 }, 00:17:19.935 { 00:17:19.935 "name": "BaseBdev3", 00:17:19.935 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:19.935 "is_configured": true, 00:17:19.935 "data_offset": 2048, 00:17:19.935 "data_size": 63488 00:17:19.935 }, 00:17:19.935 { 00:17:19.935 "name": "BaseBdev4", 00:17:19.935 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:19.935 "is_configured": true, 00:17:19.935 "data_offset": 2048, 00:17:19.935 "data_size": 63488 00:17:19.935 } 00:17:19.935 ] 00:17:19.935 }' 00:17:19.935 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.935 10:45:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.501 "name": "raid_bdev1", 00:17:20.501 "uuid": "fd3c5c00-b0e5-49be-9e18-fb35596787af", 00:17:20.501 "strip_size_kb": 0, 00:17:20.501 "state": "online", 00:17:20.501 "raid_level": "raid1", 00:17:20.501 "superblock": true, 00:17:20.501 "num_base_bdevs": 4, 00:17:20.501 "num_base_bdevs_discovered": 2, 00:17:20.501 "num_base_bdevs_operational": 2, 00:17:20.501 "base_bdevs_list": [ 00:17:20.501 { 00:17:20.501 "name": null, 00:17:20.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.501 "is_configured": false, 00:17:20.501 "data_offset": 0, 00:17:20.501 "data_size": 63488 00:17:20.501 }, 00:17:20.501 { 00:17:20.501 "name": null, 00:17:20.501 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:20.501 "is_configured": false, 00:17:20.501 "data_offset": 2048, 00:17:20.501 "data_size": 63488 00:17:20.501 }, 00:17:20.501 { 00:17:20.501 "name": "BaseBdev3", 00:17:20.501 "uuid": "41879572-9b87-5921-8cb9-c45f02363c72", 00:17:20.501 "is_configured": true, 00:17:20.501 "data_offset": 2048, 00:17:20.501 "data_size": 63488 00:17:20.501 }, 00:17:20.501 { 00:17:20.501 "name": "BaseBdev4", 00:17:20.501 "uuid": "9c6b9b02-bbbe-540e-aa02-2871b515dfae", 00:17:20.501 "is_configured": true, 00:17:20.501 "data_offset": 2048, 00:17:20.501 "data_size": 63488 00:17:20.501 } 00:17:20.501 ] 00:17:20.501 }' 00:17:20.501 10:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.501 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.501 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79630 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79630 ']' 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79630 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79630 00:17:20.759 killing process with pid 79630 00:17:20.759 Received shutdown signal, test time was about 20.255315 seconds 00:17:20.759 00:17:20.759 Latency(us) 00:17:20.759 [2024-11-15T10:45:51.319Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:17:20.759 [2024-11-15T10:45:51.319Z] =================================================================================================================== 00:17:20.759 [2024-11-15T10:45:51.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79630' 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79630 00:17:20.759 [2024-11-15 10:45:51.103649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.759 10:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79630 00:17:20.759 [2024-11-15 10:45:51.103809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.759 [2024-11-15 10:45:51.103905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.759 [2024-11-15 10:45:51.103921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:21.018 [2024-11-15 10:45:51.454684] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.952 10:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:21.952 00:17:21.952 real 0m23.818s 00:17:21.952 user 0m32.430s 00:17:21.952 sys 0m2.243s 00:17:21.952 10:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:21.952 10:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.952 ************************************ 00:17:21.952 END TEST raid_rebuild_test_sb_io 00:17:21.952 
************************************ 00:17:22.211 10:45:52 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:22.211 10:45:52 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:17:22.211 10:45:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:22.211 10:45:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:22.211 10:45:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.211 ************************************ 00:17:22.211 START TEST raid5f_state_function_test 00:17:22.211 ************************************ 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.211 10:45:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:22.211 Process raid pid: 80383 00:17:22.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80383 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80383' 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80383 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80383 ']' 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:22.211 10:45:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.211 [2024-11-15 10:45:52.666830] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:17:22.211 [2024-11-15 10:45:52.667222] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.470 [2024-11-15 10:45:52.853161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.470 [2024-11-15 10:45:52.982649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.729 [2024-11-15 10:45:53.204145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.729 [2024-11-15 10:45:53.204383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.295 [2024-11-15 10:45:53.697905] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.295 [2024-11-15 10:45:53.697976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.295 [2024-11-15 10:45:53.697994] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.295 [2024-11-15 10:45:53.698011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.295 [2024-11-15 10:45:53.698026] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:23.295 [2024-11-15 10:45:53.698040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.295 "name": "Existed_Raid", 00:17:23.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.295 "strip_size_kb": 64, 00:17:23.295 "state": "configuring", 00:17:23.295 "raid_level": "raid5f", 00:17:23.295 "superblock": false, 00:17:23.295 "num_base_bdevs": 3, 00:17:23.295 "num_base_bdevs_discovered": 0, 00:17:23.295 "num_base_bdevs_operational": 3, 00:17:23.295 "base_bdevs_list": [ 00:17:23.295 { 00:17:23.295 "name": "BaseBdev1", 00:17:23.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.295 "is_configured": false, 00:17:23.295 "data_offset": 0, 00:17:23.295 "data_size": 0 00:17:23.295 }, 00:17:23.295 { 00:17:23.295 "name": "BaseBdev2", 00:17:23.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.295 "is_configured": false, 00:17:23.295 "data_offset": 0, 00:17:23.295 "data_size": 0 00:17:23.295 }, 00:17:23.295 { 00:17:23.295 "name": "BaseBdev3", 00:17:23.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.295 "is_configured": false, 00:17:23.295 "data_offset": 0, 00:17:23.295 "data_size": 0 00:17:23.295 } 00:17:23.295 ] 00:17:23.295 }' 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.295 10:45:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.863 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:23.863 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.863 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.863 [2024-11-15 10:45:54.213962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.863 [2024-11-15 10:45:54.214006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:17:23.863 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.863 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:23.863 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.863 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.864 [2024-11-15 10:45:54.221952] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.864 [2024-11-15 10:45:54.222011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.864 [2024-11-15 10:45:54.222028] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.864 [2024-11-15 10:45:54.222044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.864 [2024-11-15 10:45:54.222053] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:23.864 [2024-11-15 10:45:54.222067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.864 [2024-11-15 10:45:54.262596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.864 BaseBdev1 00:17:23.864 10:45:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.864 [ 00:17:23.864 { 00:17:23.864 "name": "BaseBdev1", 00:17:23.864 "aliases": [ 00:17:23.864 "ad52ce47-ceaf-476f-948e-2dd74f9e6ff6" 00:17:23.864 ], 00:17:23.864 "product_name": "Malloc disk", 00:17:23.864 "block_size": 512, 00:17:23.864 "num_blocks": 65536, 00:17:23.864 "uuid": "ad52ce47-ceaf-476f-948e-2dd74f9e6ff6", 00:17:23.864 "assigned_rate_limits": { 00:17:23.864 "rw_ios_per_sec": 0, 00:17:23.864 
"rw_mbytes_per_sec": 0, 00:17:23.864 "r_mbytes_per_sec": 0, 00:17:23.864 "w_mbytes_per_sec": 0 00:17:23.864 }, 00:17:23.864 "claimed": true, 00:17:23.864 "claim_type": "exclusive_write", 00:17:23.864 "zoned": false, 00:17:23.864 "supported_io_types": { 00:17:23.864 "read": true, 00:17:23.864 "write": true, 00:17:23.864 "unmap": true, 00:17:23.864 "flush": true, 00:17:23.864 "reset": true, 00:17:23.864 "nvme_admin": false, 00:17:23.864 "nvme_io": false, 00:17:23.864 "nvme_io_md": false, 00:17:23.864 "write_zeroes": true, 00:17:23.864 "zcopy": true, 00:17:23.864 "get_zone_info": false, 00:17:23.864 "zone_management": false, 00:17:23.864 "zone_append": false, 00:17:23.864 "compare": false, 00:17:23.864 "compare_and_write": false, 00:17:23.864 "abort": true, 00:17:23.864 "seek_hole": false, 00:17:23.864 "seek_data": false, 00:17:23.864 "copy": true, 00:17:23.864 "nvme_iov_md": false 00:17:23.864 }, 00:17:23.864 "memory_domains": [ 00:17:23.864 { 00:17:23.864 "dma_device_id": "system", 00:17:23.864 "dma_device_type": 1 00:17:23.864 }, 00:17:23.864 { 00:17:23.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.864 "dma_device_type": 2 00:17:23.864 } 00:17:23.864 ], 00:17:23.864 "driver_specific": {} 00:17:23.864 } 00:17:23.864 ] 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.864 10:45:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.864 "name": "Existed_Raid", 00:17:23.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.864 "strip_size_kb": 64, 00:17:23.864 "state": "configuring", 00:17:23.864 "raid_level": "raid5f", 00:17:23.864 "superblock": false, 00:17:23.864 "num_base_bdevs": 3, 00:17:23.864 "num_base_bdevs_discovered": 1, 00:17:23.864 "num_base_bdevs_operational": 3, 00:17:23.864 "base_bdevs_list": [ 00:17:23.864 { 00:17:23.864 "name": "BaseBdev1", 00:17:23.864 "uuid": "ad52ce47-ceaf-476f-948e-2dd74f9e6ff6", 00:17:23.864 "is_configured": true, 00:17:23.864 "data_offset": 0, 00:17:23.864 "data_size": 65536 00:17:23.864 }, 00:17:23.864 { 00:17:23.864 "name": 
"BaseBdev2", 00:17:23.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.864 "is_configured": false, 00:17:23.864 "data_offset": 0, 00:17:23.864 "data_size": 0 00:17:23.864 }, 00:17:23.864 { 00:17:23.864 "name": "BaseBdev3", 00:17:23.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.864 "is_configured": false, 00:17:23.864 "data_offset": 0, 00:17:23.864 "data_size": 0 00:17:23.864 } 00:17:23.864 ] 00:17:23.864 }' 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.864 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.431 [2024-11-15 10:45:54.806817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.431 [2024-11-15 10:45:54.806883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.431 [2024-11-15 10:45:54.814849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.431 [2024-11-15 10:45:54.817267] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:17:24.431 [2024-11-15 10:45:54.817327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.431 [2024-11-15 10:45:54.817364] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.431 [2024-11-15 10:45:54.817385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.431 "name": "Existed_Raid", 00:17:24.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.431 "strip_size_kb": 64, 00:17:24.431 "state": "configuring", 00:17:24.431 "raid_level": "raid5f", 00:17:24.431 "superblock": false, 00:17:24.431 "num_base_bdevs": 3, 00:17:24.431 "num_base_bdevs_discovered": 1, 00:17:24.431 "num_base_bdevs_operational": 3, 00:17:24.431 "base_bdevs_list": [ 00:17:24.431 { 00:17:24.431 "name": "BaseBdev1", 00:17:24.431 "uuid": "ad52ce47-ceaf-476f-948e-2dd74f9e6ff6", 00:17:24.431 "is_configured": true, 00:17:24.431 "data_offset": 0, 00:17:24.431 "data_size": 65536 00:17:24.431 }, 00:17:24.431 { 00:17:24.431 "name": "BaseBdev2", 00:17:24.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.431 "is_configured": false, 00:17:24.431 "data_offset": 0, 00:17:24.431 "data_size": 0 00:17:24.431 }, 00:17:24.431 { 00:17:24.431 "name": "BaseBdev3", 00:17:24.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.431 "is_configured": false, 00:17:24.431 "data_offset": 0, 00:17:24.431 "data_size": 0 00:17:24.431 } 00:17:24.431 ] 00:17:24.431 }' 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.431 10:45:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.997 [2024-11-15 10:45:55.377227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.997 BaseBdev2 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.997 10:45:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.997 [ 00:17:24.997 { 00:17:24.997 "name": "BaseBdev2", 00:17:24.997 "aliases": [ 00:17:24.997 "9dda6ccf-8753-48b7-9fb8-c6c6612806e2" 00:17:24.997 ], 00:17:24.997 "product_name": "Malloc disk", 00:17:24.997 "block_size": 512, 00:17:24.997 "num_blocks": 65536, 00:17:24.997 "uuid": "9dda6ccf-8753-48b7-9fb8-c6c6612806e2", 00:17:24.997 "assigned_rate_limits": { 00:17:24.997 "rw_ios_per_sec": 0, 00:17:24.997 "rw_mbytes_per_sec": 0, 00:17:24.997 "r_mbytes_per_sec": 0, 00:17:24.997 "w_mbytes_per_sec": 0 00:17:24.997 }, 00:17:24.997 "claimed": true, 00:17:24.997 "claim_type": "exclusive_write", 00:17:24.997 "zoned": false, 00:17:24.997 "supported_io_types": { 00:17:24.998 "read": true, 00:17:24.998 "write": true, 00:17:24.998 "unmap": true, 00:17:24.998 "flush": true, 00:17:24.998 "reset": true, 00:17:24.998 "nvme_admin": false, 00:17:24.998 "nvme_io": false, 00:17:24.998 "nvme_io_md": false, 00:17:24.998 "write_zeroes": true, 00:17:24.998 "zcopy": true, 00:17:24.998 "get_zone_info": false, 00:17:24.998 "zone_management": false, 00:17:24.998 "zone_append": false, 00:17:24.998 "compare": false, 00:17:24.998 "compare_and_write": false, 00:17:24.998 "abort": true, 00:17:24.998 "seek_hole": false, 00:17:24.998 "seek_data": false, 00:17:24.998 "copy": true, 00:17:24.998 "nvme_iov_md": false 00:17:24.998 }, 00:17:24.998 "memory_domains": [ 00:17:24.998 { 00:17:24.998 "dma_device_id": "system", 00:17:24.998 "dma_device_type": 1 00:17:24.998 }, 00:17:24.998 { 00:17:24.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.998 "dma_device_type": 2 00:17:24.998 } 00:17:24.998 ], 00:17:24.998 "driver_specific": {} 00:17:24.998 } 00:17:24.998 ] 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:17:24.998 "name": "Existed_Raid", 00:17:24.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.998 "strip_size_kb": 64, 00:17:24.998 "state": "configuring", 00:17:24.998 "raid_level": "raid5f", 00:17:24.998 "superblock": false, 00:17:24.998 "num_base_bdevs": 3, 00:17:24.998 "num_base_bdevs_discovered": 2, 00:17:24.998 "num_base_bdevs_operational": 3, 00:17:24.998 "base_bdevs_list": [ 00:17:24.998 { 00:17:24.998 "name": "BaseBdev1", 00:17:24.998 "uuid": "ad52ce47-ceaf-476f-948e-2dd74f9e6ff6", 00:17:24.998 "is_configured": true, 00:17:24.998 "data_offset": 0, 00:17:24.998 "data_size": 65536 00:17:24.998 }, 00:17:24.998 { 00:17:24.998 "name": "BaseBdev2", 00:17:24.998 "uuid": "9dda6ccf-8753-48b7-9fb8-c6c6612806e2", 00:17:24.998 "is_configured": true, 00:17:24.998 "data_offset": 0, 00:17:24.998 "data_size": 65536 00:17:24.998 }, 00:17:24.998 { 00:17:24.998 "name": "BaseBdev3", 00:17:24.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.998 "is_configured": false, 00:17:24.998 "data_offset": 0, 00:17:24.998 "data_size": 0 00:17:24.998 } 00:17:24.998 ] 00:17:24.998 }' 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.998 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.588 [2024-11-15 10:45:55.979952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.588 [2024-11-15 10:45:55.980046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:25.588 [2024-11-15 10:45:55.980073] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:25.588 [2024-11-15 10:45:55.980431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:25.588 [2024-11-15 10:45:55.985870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:25.588 BaseBdev3 00:17:25.588 [2024-11-15 10:45:55.986031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:25.588 [2024-11-15 10:45:55.986450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.588 10:45:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.588 [ 00:17:25.588 { 00:17:25.588 "name": "BaseBdev3", 00:17:25.588 "aliases": [ 00:17:25.588 "9f418467-14b2-4d65-9d77-c810d8d6940f" 00:17:25.588 ], 00:17:25.588 "product_name": "Malloc disk", 00:17:25.588 "block_size": 512, 00:17:25.588 "num_blocks": 65536, 00:17:25.588 "uuid": "9f418467-14b2-4d65-9d77-c810d8d6940f", 00:17:25.588 "assigned_rate_limits": { 00:17:25.588 "rw_ios_per_sec": 0, 00:17:25.588 "rw_mbytes_per_sec": 0, 00:17:25.588 "r_mbytes_per_sec": 0, 00:17:25.588 "w_mbytes_per_sec": 0 00:17:25.588 }, 00:17:25.588 "claimed": true, 00:17:25.588 "claim_type": "exclusive_write", 00:17:25.588 "zoned": false, 00:17:25.588 "supported_io_types": { 00:17:25.588 "read": true, 00:17:25.588 "write": true, 00:17:25.588 "unmap": true, 00:17:25.588 "flush": true, 00:17:25.588 "reset": true, 00:17:25.588 "nvme_admin": false, 00:17:25.588 "nvme_io": false, 00:17:25.588 "nvme_io_md": false, 00:17:25.588 "write_zeroes": true, 00:17:25.588 "zcopy": true, 00:17:25.588 "get_zone_info": false, 00:17:25.588 "zone_management": false, 00:17:25.588 "zone_append": false, 00:17:25.588 "compare": false, 00:17:25.588 "compare_and_write": false, 00:17:25.588 "abort": true, 00:17:25.588 "seek_hole": false, 00:17:25.588 "seek_data": false, 00:17:25.588 "copy": true, 00:17:25.588 "nvme_iov_md": false 00:17:25.588 }, 00:17:25.588 "memory_domains": [ 00:17:25.588 { 00:17:25.588 "dma_device_id": "system", 00:17:25.588 "dma_device_type": 1 00:17:25.588 }, 00:17:25.588 { 00:17:25.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.588 "dma_device_type": 2 00:17:25.588 } 00:17:25.588 ], 00:17:25.588 "driver_specific": {} 00:17:25.588 } 00:17:25.588 ] 00:17:25.588 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:25.588 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:25.588 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:25.588 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.589 10:45:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.589 "name": "Existed_Raid", 00:17:25.589 "uuid": "4dee8244-7516-4b41-b5d5-c60e9a199121", 00:17:25.589 "strip_size_kb": 64, 00:17:25.589 "state": "online", 00:17:25.589 "raid_level": "raid5f", 00:17:25.589 "superblock": false, 00:17:25.589 "num_base_bdevs": 3, 00:17:25.589 "num_base_bdevs_discovered": 3, 00:17:25.589 "num_base_bdevs_operational": 3, 00:17:25.589 "base_bdevs_list": [ 00:17:25.589 { 00:17:25.589 "name": "BaseBdev1", 00:17:25.589 "uuid": "ad52ce47-ceaf-476f-948e-2dd74f9e6ff6", 00:17:25.589 "is_configured": true, 00:17:25.589 "data_offset": 0, 00:17:25.589 "data_size": 65536 00:17:25.589 }, 00:17:25.589 { 00:17:25.589 "name": "BaseBdev2", 00:17:25.589 "uuid": "9dda6ccf-8753-48b7-9fb8-c6c6612806e2", 00:17:25.589 "is_configured": true, 00:17:25.589 "data_offset": 0, 00:17:25.589 "data_size": 65536 00:17:25.589 }, 00:17:25.589 { 00:17:25.589 "name": "BaseBdev3", 00:17:25.589 "uuid": "9f418467-14b2-4d65-9d77-c810d8d6940f", 00:17:25.589 "is_configured": true, 00:17:25.589 "data_offset": 0, 00:17:25.589 "data_size": 65536 00:17:25.589 } 00:17:25.589 ] 00:17:25.589 }' 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.589 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:26.156 10:45:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.156 [2024-11-15 10:45:56.572070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:26.156 "name": "Existed_Raid", 00:17:26.156 "aliases": [ 00:17:26.156 "4dee8244-7516-4b41-b5d5-c60e9a199121" 00:17:26.156 ], 00:17:26.156 "product_name": "Raid Volume", 00:17:26.156 "block_size": 512, 00:17:26.156 "num_blocks": 131072, 00:17:26.156 "uuid": "4dee8244-7516-4b41-b5d5-c60e9a199121", 00:17:26.156 "assigned_rate_limits": { 00:17:26.156 "rw_ios_per_sec": 0, 00:17:26.156 "rw_mbytes_per_sec": 0, 00:17:26.156 "r_mbytes_per_sec": 0, 00:17:26.156 "w_mbytes_per_sec": 0 00:17:26.156 }, 00:17:26.156 "claimed": false, 00:17:26.156 "zoned": false, 00:17:26.156 "supported_io_types": { 00:17:26.156 "read": true, 00:17:26.156 "write": true, 00:17:26.156 "unmap": false, 00:17:26.156 "flush": false, 00:17:26.156 "reset": true, 00:17:26.156 "nvme_admin": false, 00:17:26.156 "nvme_io": false, 00:17:26.156 "nvme_io_md": false, 00:17:26.156 "write_zeroes": true, 00:17:26.156 "zcopy": false, 00:17:26.156 "get_zone_info": false, 00:17:26.156 "zone_management": false, 00:17:26.156 "zone_append": false, 
00:17:26.156 "compare": false, 00:17:26.156 "compare_and_write": false, 00:17:26.156 "abort": false, 00:17:26.156 "seek_hole": false, 00:17:26.156 "seek_data": false, 00:17:26.156 "copy": false, 00:17:26.156 "nvme_iov_md": false 00:17:26.156 }, 00:17:26.156 "driver_specific": { 00:17:26.156 "raid": { 00:17:26.156 "uuid": "4dee8244-7516-4b41-b5d5-c60e9a199121", 00:17:26.156 "strip_size_kb": 64, 00:17:26.156 "state": "online", 00:17:26.156 "raid_level": "raid5f", 00:17:26.156 "superblock": false, 00:17:26.156 "num_base_bdevs": 3, 00:17:26.156 "num_base_bdevs_discovered": 3, 00:17:26.156 "num_base_bdevs_operational": 3, 00:17:26.156 "base_bdevs_list": [ 00:17:26.156 { 00:17:26.156 "name": "BaseBdev1", 00:17:26.156 "uuid": "ad52ce47-ceaf-476f-948e-2dd74f9e6ff6", 00:17:26.156 "is_configured": true, 00:17:26.156 "data_offset": 0, 00:17:26.156 "data_size": 65536 00:17:26.156 }, 00:17:26.156 { 00:17:26.156 "name": "BaseBdev2", 00:17:26.156 "uuid": "9dda6ccf-8753-48b7-9fb8-c6c6612806e2", 00:17:26.156 "is_configured": true, 00:17:26.156 "data_offset": 0, 00:17:26.156 "data_size": 65536 00:17:26.156 }, 00:17:26.156 { 00:17:26.156 "name": "BaseBdev3", 00:17:26.156 "uuid": "9f418467-14b2-4d65-9d77-c810d8d6940f", 00:17:26.156 "is_configured": true, 00:17:26.156 "data_offset": 0, 00:17:26.156 "data_size": 65536 00:17:26.156 } 00:17:26.156 ] 00:17:26.156 } 00:17:26.156 } 00:17:26.156 }' 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:26.156 BaseBdev2 00:17:26.156 BaseBdev3' 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:17:26.156 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.415 [2024-11-15 10:45:56.871994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:26.415 
10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.415 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.674 10:45:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.674 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.674 "name": "Existed_Raid", 00:17:26.674 "uuid": "4dee8244-7516-4b41-b5d5-c60e9a199121", 00:17:26.674 "strip_size_kb": 64, 00:17:26.674 "state": 
"online", 00:17:26.674 "raid_level": "raid5f", 00:17:26.674 "superblock": false, 00:17:26.674 "num_base_bdevs": 3, 00:17:26.674 "num_base_bdevs_discovered": 2, 00:17:26.674 "num_base_bdevs_operational": 2, 00:17:26.674 "base_bdevs_list": [ 00:17:26.674 { 00:17:26.674 "name": null, 00:17:26.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.674 "is_configured": false, 00:17:26.674 "data_offset": 0, 00:17:26.674 "data_size": 65536 00:17:26.674 }, 00:17:26.674 { 00:17:26.674 "name": "BaseBdev2", 00:17:26.674 "uuid": "9dda6ccf-8753-48b7-9fb8-c6c6612806e2", 00:17:26.674 "is_configured": true, 00:17:26.674 "data_offset": 0, 00:17:26.674 "data_size": 65536 00:17:26.674 }, 00:17:26.674 { 00:17:26.674 "name": "BaseBdev3", 00:17:26.674 "uuid": "9f418467-14b2-4d65-9d77-c810d8d6940f", 00:17:26.674 "is_configured": true, 00:17:26.674 "data_offset": 0, 00:17:26.674 "data_size": 65536 00:17:26.674 } 00:17:26.674 ] 00:17:26.674 }' 00:17:26.674 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.674 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.934 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:26.934 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:26.934 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.934 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:26.934 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.934 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 [2024-11-15 10:45:57.539247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.193 [2024-11-15 10:45:57.539387] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.193 [2024-11-15 10:45:57.620287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.193 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.193 [2024-11-15 10:45:57.680413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:27.193 [2024-11-15 10:45:57.680476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 BaseBdev2 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.452 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:27.452 [ 00:17:27.452 { 00:17:27.452 "name": "BaseBdev2", 00:17:27.452 "aliases": [ 00:17:27.452 "62e534a1-ace0-4d28-bb0f-28cb092006a9" 00:17:27.452 ], 00:17:27.452 "product_name": "Malloc disk", 00:17:27.453 "block_size": 512, 00:17:27.453 "num_blocks": 65536, 00:17:27.453 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:27.453 "assigned_rate_limits": { 00:17:27.453 "rw_ios_per_sec": 0, 00:17:27.453 "rw_mbytes_per_sec": 0, 00:17:27.453 "r_mbytes_per_sec": 0, 00:17:27.453 "w_mbytes_per_sec": 0 00:17:27.453 }, 00:17:27.453 "claimed": false, 00:17:27.453 "zoned": false, 00:17:27.453 "supported_io_types": { 00:17:27.453 "read": true, 00:17:27.453 "write": true, 00:17:27.453 "unmap": true, 00:17:27.453 "flush": true, 00:17:27.453 "reset": true, 00:17:27.453 "nvme_admin": false, 00:17:27.453 "nvme_io": false, 00:17:27.453 "nvme_io_md": false, 00:17:27.453 "write_zeroes": true, 00:17:27.453 "zcopy": true, 00:17:27.453 "get_zone_info": false, 00:17:27.453 "zone_management": false, 00:17:27.453 "zone_append": false, 00:17:27.453 "compare": false, 00:17:27.453 "compare_and_write": false, 00:17:27.453 "abort": true, 00:17:27.453 "seek_hole": false, 00:17:27.453 "seek_data": false, 00:17:27.453 "copy": true, 00:17:27.453 "nvme_iov_md": false 00:17:27.453 }, 00:17:27.453 "memory_domains": [ 00:17:27.453 { 00:17:27.453 "dma_device_id": "system", 00:17:27.453 "dma_device_type": 1 00:17:27.453 }, 00:17:27.453 { 00:17:27.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.453 "dma_device_type": 2 00:17:27.453 } 00:17:27.453 ], 00:17:27.453 "driver_specific": {} 00:17:27.453 } 00:17:27.453 ] 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 BaseBdev3 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.453 [ 00:17:27.453 { 00:17:27.453 "name": "BaseBdev3", 00:17:27.453 "aliases": [ 00:17:27.453 "a83665a7-81d6-4c98-acfe-e9e91115a0ef" 00:17:27.453 ], 00:17:27.453 "product_name": "Malloc disk", 00:17:27.453 "block_size": 512, 00:17:27.453 "num_blocks": 65536, 00:17:27.453 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:27.453 "assigned_rate_limits": { 00:17:27.453 "rw_ios_per_sec": 0, 00:17:27.453 "rw_mbytes_per_sec": 0, 00:17:27.453 "r_mbytes_per_sec": 0, 00:17:27.453 "w_mbytes_per_sec": 0 00:17:27.453 }, 00:17:27.453 "claimed": false, 00:17:27.453 "zoned": false, 00:17:27.453 "supported_io_types": { 00:17:27.453 "read": true, 00:17:27.453 "write": true, 00:17:27.453 "unmap": true, 00:17:27.453 "flush": true, 00:17:27.453 "reset": true, 00:17:27.453 "nvme_admin": false, 00:17:27.453 "nvme_io": false, 00:17:27.453 "nvme_io_md": false, 00:17:27.453 "write_zeroes": true, 00:17:27.453 "zcopy": true, 00:17:27.453 "get_zone_info": false, 00:17:27.453 "zone_management": false, 00:17:27.453 "zone_append": false, 00:17:27.453 "compare": false, 00:17:27.453 "compare_and_write": false, 00:17:27.453 "abort": true, 00:17:27.453 "seek_hole": false, 00:17:27.453 "seek_data": false, 00:17:27.453 "copy": true, 00:17:27.453 "nvme_iov_md": false 00:17:27.453 }, 00:17:27.453 "memory_domains": [ 00:17:27.453 { 00:17:27.453 "dma_device_id": "system", 00:17:27.453 "dma_device_type": 1 00:17:27.453 }, 00:17:27.453 { 00:17:27.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.453 "dma_device_type": 2 00:17:27.453 } 00:17:27.453 ], 00:17:27.453 "driver_specific": {} 00:17:27.453 } 00:17:27.453 ] 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:27.453 10:45:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 [2024-11-15 10:45:57.953252] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.453 [2024-11-15 10:45:57.953314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.453 [2024-11-15 10:45:57.953368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.453 [2024-11-15 10:45:57.955639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.453 10:45:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.453 10:45:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.712 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.712 "name": "Existed_Raid", 00:17:27.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.712 "strip_size_kb": 64, 00:17:27.712 "state": "configuring", 00:17:27.712 "raid_level": "raid5f", 00:17:27.712 "superblock": false, 00:17:27.712 "num_base_bdevs": 3, 00:17:27.712 "num_base_bdevs_discovered": 2, 00:17:27.712 "num_base_bdevs_operational": 3, 00:17:27.712 "base_bdevs_list": [ 00:17:27.712 { 00:17:27.712 "name": "BaseBdev1", 00:17:27.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.713 "is_configured": false, 00:17:27.713 "data_offset": 0, 00:17:27.713 "data_size": 0 00:17:27.713 }, 00:17:27.713 { 00:17:27.713 "name": "BaseBdev2", 00:17:27.713 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:27.713 "is_configured": true, 00:17:27.713 "data_offset": 0, 00:17:27.713 "data_size": 65536 00:17:27.713 }, 00:17:27.713 { 00:17:27.713 "name": "BaseBdev3", 00:17:27.713 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:27.713 "is_configured": true, 
00:17:27.713 "data_offset": 0, 00:17:27.713 "data_size": 65536 00:17:27.713 } 00:17:27.713 ] 00:17:27.713 }' 00:17:27.713 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.713 10:45:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.971 [2024-11-15 10:45:58.501431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.971 10:45:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.971 10:45:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.230 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.230 "name": "Existed_Raid", 00:17:28.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.230 "strip_size_kb": 64, 00:17:28.230 "state": "configuring", 00:17:28.230 "raid_level": "raid5f", 00:17:28.230 "superblock": false, 00:17:28.230 "num_base_bdevs": 3, 00:17:28.230 "num_base_bdevs_discovered": 1, 00:17:28.230 "num_base_bdevs_operational": 3, 00:17:28.230 "base_bdevs_list": [ 00:17:28.230 { 00:17:28.230 "name": "BaseBdev1", 00:17:28.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.230 "is_configured": false, 00:17:28.230 "data_offset": 0, 00:17:28.230 "data_size": 0 00:17:28.230 }, 00:17:28.230 { 00:17:28.230 "name": null, 00:17:28.230 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:28.230 "is_configured": false, 00:17:28.230 "data_offset": 0, 00:17:28.230 "data_size": 65536 00:17:28.230 }, 00:17:28.230 { 00:17:28.230 "name": "BaseBdev3", 00:17:28.230 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:28.230 "is_configured": true, 00:17:28.230 "data_offset": 0, 00:17:28.230 "data_size": 65536 00:17:28.230 } 00:17:28.230 ] 00:17:28.230 }' 00:17:28.230 10:45:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.230 10:45:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.489 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.489 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:28.489 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.489 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.489 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.748 [2024-11-15 10:45:59.099293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.748 BaseBdev1 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.748 10:45:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.748 [ 00:17:28.748 { 00:17:28.748 "name": "BaseBdev1", 00:17:28.748 "aliases": [ 00:17:28.748 "61057788-9540-4881-ab6c-6d3a15f5a3ee" 00:17:28.748 ], 00:17:28.748 "product_name": "Malloc disk", 00:17:28.748 "block_size": 512, 00:17:28.748 "num_blocks": 65536, 00:17:28.748 "uuid": "61057788-9540-4881-ab6c-6d3a15f5a3ee", 00:17:28.748 "assigned_rate_limits": { 00:17:28.748 "rw_ios_per_sec": 0, 00:17:28.748 "rw_mbytes_per_sec": 0, 00:17:28.748 "r_mbytes_per_sec": 0, 00:17:28.748 "w_mbytes_per_sec": 0 00:17:28.748 }, 00:17:28.748 "claimed": true, 00:17:28.748 "claim_type": "exclusive_write", 00:17:28.748 "zoned": false, 00:17:28.748 "supported_io_types": { 00:17:28.748 "read": true, 00:17:28.748 "write": true, 00:17:28.748 "unmap": true, 00:17:28.748 "flush": true, 00:17:28.748 "reset": true, 00:17:28.748 "nvme_admin": false, 00:17:28.748 "nvme_io": false, 00:17:28.748 "nvme_io_md": false, 00:17:28.748 "write_zeroes": true, 00:17:28.748 "zcopy": true, 00:17:28.748 "get_zone_info": false, 00:17:28.748 "zone_management": false, 00:17:28.748 "zone_append": false, 00:17:28.748 
"compare": false, 00:17:28.748 "compare_and_write": false, 00:17:28.748 "abort": true, 00:17:28.748 "seek_hole": false, 00:17:28.748 "seek_data": false, 00:17:28.748 "copy": true, 00:17:28.748 "nvme_iov_md": false 00:17:28.748 }, 00:17:28.748 "memory_domains": [ 00:17:28.748 { 00:17:28.748 "dma_device_id": "system", 00:17:28.748 "dma_device_type": 1 00:17:28.748 }, 00:17:28.748 { 00:17:28.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.748 "dma_device_type": 2 00:17:28.748 } 00:17:28.748 ], 00:17:28.748 "driver_specific": {} 00:17:28.748 } 00:17:28.748 ] 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.748 10:45:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.748 "name": "Existed_Raid", 00:17:28.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.748 "strip_size_kb": 64, 00:17:28.748 "state": "configuring", 00:17:28.748 "raid_level": "raid5f", 00:17:28.748 "superblock": false, 00:17:28.748 "num_base_bdevs": 3, 00:17:28.748 "num_base_bdevs_discovered": 2, 00:17:28.748 "num_base_bdevs_operational": 3, 00:17:28.748 "base_bdevs_list": [ 00:17:28.748 { 00:17:28.748 "name": "BaseBdev1", 00:17:28.748 "uuid": "61057788-9540-4881-ab6c-6d3a15f5a3ee", 00:17:28.748 "is_configured": true, 00:17:28.748 "data_offset": 0, 00:17:28.748 "data_size": 65536 00:17:28.748 }, 00:17:28.748 { 00:17:28.748 "name": null, 00:17:28.748 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:28.748 "is_configured": false, 00:17:28.748 "data_offset": 0, 00:17:28.748 "data_size": 65536 00:17:28.748 }, 00:17:28.748 { 00:17:28.748 "name": "BaseBdev3", 00:17:28.748 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:28.748 "is_configured": true, 00:17:28.748 "data_offset": 0, 00:17:28.748 "data_size": 65536 00:17:28.748 } 00:17:28.748 ] 00:17:28.748 }' 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.748 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.315 10:45:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.315 [2024-11-15 10:45:59.683561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.315 10:45:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.315 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.316 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.316 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.316 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.316 "name": "Existed_Raid", 00:17:29.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.316 "strip_size_kb": 64, 00:17:29.316 "state": "configuring", 00:17:29.316 "raid_level": "raid5f", 00:17:29.316 "superblock": false, 00:17:29.316 "num_base_bdevs": 3, 00:17:29.316 "num_base_bdevs_discovered": 1, 00:17:29.316 "num_base_bdevs_operational": 3, 00:17:29.316 "base_bdevs_list": [ 00:17:29.316 { 00:17:29.316 "name": "BaseBdev1", 00:17:29.316 "uuid": "61057788-9540-4881-ab6c-6d3a15f5a3ee", 00:17:29.316 "is_configured": true, 00:17:29.316 "data_offset": 0, 00:17:29.316 "data_size": 65536 00:17:29.316 }, 00:17:29.316 { 00:17:29.316 "name": null, 00:17:29.316 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:29.316 "is_configured": false, 00:17:29.316 "data_offset": 0, 00:17:29.316 "data_size": 65536 00:17:29.316 }, 00:17:29.316 { 00:17:29.316 "name": null, 
00:17:29.316 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:29.316 "is_configured": false, 00:17:29.316 "data_offset": 0, 00:17:29.316 "data_size": 65536 00:17:29.316 } 00:17:29.316 ] 00:17:29.316 }' 00:17:29.316 10:45:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.316 10:45:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.882 [2024-11-15 10:46:00.255697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.882 10:46:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.882 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.883 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.883 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.883 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.883 "name": "Existed_Raid", 00:17:29.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.883 "strip_size_kb": 64, 00:17:29.883 "state": "configuring", 00:17:29.883 "raid_level": "raid5f", 00:17:29.883 "superblock": false, 00:17:29.883 "num_base_bdevs": 3, 00:17:29.883 "num_base_bdevs_discovered": 2, 00:17:29.883 "num_base_bdevs_operational": 3, 00:17:29.883 "base_bdevs_list": [ 00:17:29.883 { 
00:17:29.883 "name": "BaseBdev1", 00:17:29.883 "uuid": "61057788-9540-4881-ab6c-6d3a15f5a3ee", 00:17:29.883 "is_configured": true, 00:17:29.883 "data_offset": 0, 00:17:29.883 "data_size": 65536 00:17:29.883 }, 00:17:29.883 { 00:17:29.883 "name": null, 00:17:29.883 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:29.883 "is_configured": false, 00:17:29.883 "data_offset": 0, 00:17:29.883 "data_size": 65536 00:17:29.883 }, 00:17:29.883 { 00:17:29.883 "name": "BaseBdev3", 00:17:29.883 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:29.883 "is_configured": true, 00:17:29.883 "data_offset": 0, 00:17:29.883 "data_size": 65536 00:17:29.883 } 00:17:29.883 ] 00:17:29.883 }' 00:17:29.883 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.883 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.450 [2024-11-15 10:46:00.831880] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.450 "name": "Existed_Raid", 00:17:30.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.450 "strip_size_kb": 64, 00:17:30.450 "state": "configuring", 00:17:30.450 "raid_level": "raid5f", 00:17:30.450 "superblock": false, 00:17:30.450 "num_base_bdevs": 3, 00:17:30.450 "num_base_bdevs_discovered": 1, 00:17:30.450 "num_base_bdevs_operational": 3, 00:17:30.450 "base_bdevs_list": [ 00:17:30.450 { 00:17:30.450 "name": null, 00:17:30.450 "uuid": "61057788-9540-4881-ab6c-6d3a15f5a3ee", 00:17:30.450 "is_configured": false, 00:17:30.450 "data_offset": 0, 00:17:30.450 "data_size": 65536 00:17:30.450 }, 00:17:30.450 { 00:17:30.450 "name": null, 00:17:30.450 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:30.450 "is_configured": false, 00:17:30.450 "data_offset": 0, 00:17:30.450 "data_size": 65536 00:17:30.450 }, 00:17:30.450 { 00:17:30.450 "name": "BaseBdev3", 00:17:30.450 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:30.450 "is_configured": true, 00:17:30.450 "data_offset": 0, 00:17:30.450 "data_size": 65536 00:17:30.450 } 00:17:30.450 ] 00:17:30.450 }' 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.450 10:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.015 [2024-11-15 10:46:01.500687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.015 10:46:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.015 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.015 "name": "Existed_Raid", 00:17:31.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.016 "strip_size_kb": 64, 00:17:31.016 "state": "configuring", 00:17:31.016 "raid_level": "raid5f", 00:17:31.016 "superblock": false, 00:17:31.016 "num_base_bdevs": 3, 00:17:31.016 "num_base_bdevs_discovered": 2, 00:17:31.016 "num_base_bdevs_operational": 3, 00:17:31.016 "base_bdevs_list": [ 00:17:31.016 { 00:17:31.016 "name": null, 00:17:31.016 "uuid": "61057788-9540-4881-ab6c-6d3a15f5a3ee", 00:17:31.016 "is_configured": false, 00:17:31.016 "data_offset": 0, 00:17:31.016 "data_size": 65536 00:17:31.016 }, 00:17:31.016 { 00:17:31.016 "name": "BaseBdev2", 00:17:31.016 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:31.016 "is_configured": true, 00:17:31.016 "data_offset": 0, 00:17:31.016 "data_size": 65536 00:17:31.016 }, 00:17:31.016 { 00:17:31.016 "name": "BaseBdev3", 00:17:31.016 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:31.016 "is_configured": true, 00:17:31.016 "data_offset": 0, 00:17:31.016 "data_size": 65536 00:17:31.016 } 00:17:31.016 ] 00:17:31.016 }' 00:17:31.016 10:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.016 10:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:31.582 10:46:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 61057788-9540-4881-ab6c-6d3a15f5a3ee 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.582 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.842 [2024-11-15 10:46:02.151235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:31.842 [2024-11-15 10:46:02.151476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:31.842 [2024-11-15 10:46:02.151510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:31.842 [2024-11-15 10:46:02.151833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:17:31.842 [2024-11-15 10:46:02.156873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:31.842 [2024-11-15 10:46:02.157032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:31.842 [2024-11-15 10:46:02.157498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.842 NewBaseBdev 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.842 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.843 10:46:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.843 [ 00:17:31.843 { 00:17:31.843 "name": "NewBaseBdev", 00:17:31.843 "aliases": [ 00:17:31.843 "61057788-9540-4881-ab6c-6d3a15f5a3ee" 00:17:31.843 ], 00:17:31.843 "product_name": "Malloc disk", 00:17:31.843 "block_size": 512, 00:17:31.843 "num_blocks": 65536, 00:17:31.843 "uuid": "61057788-9540-4881-ab6c-6d3a15f5a3ee", 00:17:31.843 "assigned_rate_limits": { 00:17:31.843 "rw_ios_per_sec": 0, 00:17:31.843 "rw_mbytes_per_sec": 0, 00:17:31.843 "r_mbytes_per_sec": 0, 00:17:31.843 "w_mbytes_per_sec": 0 00:17:31.843 }, 00:17:31.843 "claimed": true, 00:17:31.843 "claim_type": "exclusive_write", 00:17:31.843 "zoned": false, 00:17:31.843 "supported_io_types": { 00:17:31.843 "read": true, 00:17:31.843 "write": true, 00:17:31.843 "unmap": true, 00:17:31.843 "flush": true, 00:17:31.843 "reset": true, 00:17:31.843 "nvme_admin": false, 00:17:31.843 "nvme_io": false, 00:17:31.843 "nvme_io_md": false, 00:17:31.843 "write_zeroes": true, 00:17:31.843 "zcopy": true, 00:17:31.843 "get_zone_info": false, 00:17:31.843 "zone_management": false, 00:17:31.843 "zone_append": false, 00:17:31.843 "compare": false, 00:17:31.843 "compare_and_write": false, 00:17:31.843 "abort": true, 00:17:31.843 "seek_hole": false, 00:17:31.843 "seek_data": false, 00:17:31.843 "copy": true, 00:17:31.843 "nvme_iov_md": false 00:17:31.843 }, 00:17:31.843 "memory_domains": [ 00:17:31.843 { 00:17:31.843 "dma_device_id": "system", 00:17:31.843 "dma_device_type": 1 00:17:31.843 }, 00:17:31.843 { 00:17:31.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.843 "dma_device_type": 2 00:17:31.843 } 00:17:31.843 ], 00:17:31.843 "driver_specific": {} 00:17:31.843 } 00:17:31.843 ] 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:31.843 10:46:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.843 "name": "Existed_Raid", 00:17:31.843 "uuid": "2d5e7d71-ce2f-4894-8d50-7a6d67872a76", 00:17:31.843 "strip_size_kb": 64, 00:17:31.843 "state": "online", 
00:17:31.843 "raid_level": "raid5f", 00:17:31.843 "superblock": false, 00:17:31.843 "num_base_bdevs": 3, 00:17:31.843 "num_base_bdevs_discovered": 3, 00:17:31.843 "num_base_bdevs_operational": 3, 00:17:31.843 "base_bdevs_list": [ 00:17:31.843 { 00:17:31.843 "name": "NewBaseBdev", 00:17:31.843 "uuid": "61057788-9540-4881-ab6c-6d3a15f5a3ee", 00:17:31.843 "is_configured": true, 00:17:31.843 "data_offset": 0, 00:17:31.843 "data_size": 65536 00:17:31.843 }, 00:17:31.843 { 00:17:31.843 "name": "BaseBdev2", 00:17:31.843 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:31.843 "is_configured": true, 00:17:31.843 "data_offset": 0, 00:17:31.843 "data_size": 65536 00:17:31.843 }, 00:17:31.843 { 00:17:31.843 "name": "BaseBdev3", 00:17:31.843 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:31.843 "is_configured": true, 00:17:31.843 "data_offset": 0, 00:17:31.843 "data_size": 65536 00:17:31.843 } 00:17:31.843 ] 00:17:31.843 }' 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.843 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:32.419 10:46:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.419 [2024-11-15 10:46:02.755172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:32.419 "name": "Existed_Raid", 00:17:32.419 "aliases": [ 00:17:32.419 "2d5e7d71-ce2f-4894-8d50-7a6d67872a76" 00:17:32.419 ], 00:17:32.419 "product_name": "Raid Volume", 00:17:32.419 "block_size": 512, 00:17:32.419 "num_blocks": 131072, 00:17:32.419 "uuid": "2d5e7d71-ce2f-4894-8d50-7a6d67872a76", 00:17:32.419 "assigned_rate_limits": { 00:17:32.419 "rw_ios_per_sec": 0, 00:17:32.419 "rw_mbytes_per_sec": 0, 00:17:32.419 "r_mbytes_per_sec": 0, 00:17:32.419 "w_mbytes_per_sec": 0 00:17:32.419 }, 00:17:32.419 "claimed": false, 00:17:32.419 "zoned": false, 00:17:32.419 "supported_io_types": { 00:17:32.419 "read": true, 00:17:32.419 "write": true, 00:17:32.419 "unmap": false, 00:17:32.419 "flush": false, 00:17:32.419 "reset": true, 00:17:32.419 "nvme_admin": false, 00:17:32.419 "nvme_io": false, 00:17:32.419 "nvme_io_md": false, 00:17:32.419 "write_zeroes": true, 00:17:32.419 "zcopy": false, 00:17:32.419 "get_zone_info": false, 00:17:32.419 "zone_management": false, 00:17:32.419 "zone_append": false, 00:17:32.419 "compare": false, 00:17:32.419 "compare_and_write": false, 00:17:32.419 "abort": false, 00:17:32.419 "seek_hole": false, 00:17:32.419 "seek_data": false, 00:17:32.419 "copy": false, 00:17:32.419 "nvme_iov_md": false 00:17:32.419 }, 00:17:32.419 "driver_specific": { 00:17:32.419 "raid": { 00:17:32.419 "uuid": 
"2d5e7d71-ce2f-4894-8d50-7a6d67872a76", 00:17:32.419 "strip_size_kb": 64, 00:17:32.419 "state": "online", 00:17:32.419 "raid_level": "raid5f", 00:17:32.419 "superblock": false, 00:17:32.419 "num_base_bdevs": 3, 00:17:32.419 "num_base_bdevs_discovered": 3, 00:17:32.419 "num_base_bdevs_operational": 3, 00:17:32.419 "base_bdevs_list": [ 00:17:32.419 { 00:17:32.419 "name": "NewBaseBdev", 00:17:32.419 "uuid": "61057788-9540-4881-ab6c-6d3a15f5a3ee", 00:17:32.419 "is_configured": true, 00:17:32.419 "data_offset": 0, 00:17:32.419 "data_size": 65536 00:17:32.419 }, 00:17:32.419 { 00:17:32.419 "name": "BaseBdev2", 00:17:32.419 "uuid": "62e534a1-ace0-4d28-bb0f-28cb092006a9", 00:17:32.419 "is_configured": true, 00:17:32.419 "data_offset": 0, 00:17:32.419 "data_size": 65536 00:17:32.419 }, 00:17:32.419 { 00:17:32.419 "name": "BaseBdev3", 00:17:32.419 "uuid": "a83665a7-81d6-4c98-acfe-e9e91115a0ef", 00:17:32.419 "is_configured": true, 00:17:32.419 "data_offset": 0, 00:17:32.419 "data_size": 65536 00:17:32.419 } 00:17:32.419 ] 00:17:32.419 } 00:17:32.419 } 00:17:32.419 }' 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:32.419 BaseBdev2 00:17:32.419 BaseBdev3' 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.419 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.679 10:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.679 10:46:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.679 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.679 [2024-11-15 10:46:03.070971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:32.679 [2024-11-15 10:46:03.071017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.679 [2024-11-15 10:46:03.071117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.679 [2024-11-15 10:46:03.071511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.680 [2024-11-15 10:46:03.071536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80383 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80383 ']' 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 80383 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80383 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80383' 00:17:32.680 killing process with pid 80383 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80383 00:17:32.680 [2024-11-15 10:46:03.111968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.680 10:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80383 00:17:32.939 [2024-11-15 10:46:03.366669] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:33.876 00:17:33.876 real 0m11.824s 00:17:33.876 user 0m19.847s 00:17:33.876 sys 0m1.503s 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.876 ************************************ 00:17:33.876 END TEST raid5f_state_function_test 00:17:33.876 ************************************ 00:17:33.876 10:46:04 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:33.876 10:46:04 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:33.876 10:46:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:33.876 10:46:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.876 ************************************ 00:17:33.876 START TEST raid5f_state_function_test_sb 00:17:33.876 ************************************ 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:33.876 10:46:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:33.876 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:34.136 Process raid pid: 81015 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81015 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81015' 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81015 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 81015 ']' 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:34.136 10:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.136 [2024-11-15 10:46:04.521982] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:17:34.136 [2024-11-15 10:46:04.522277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.394 [2024-11-15 10:46:04.700449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.394 [2024-11-15 10:46:04.830628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.652 [2024-11-15 10:46:05.034766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.652 [2024-11-15 10:46:05.034818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.219 [2024-11-15 10:46:05.526159] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.219 [2024-11-15 10:46:05.526396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.219 [2024-11-15 10:46:05.526426] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.219 [2024-11-15 10:46:05.526446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.219 [2024-11-15 10:46:05.526456] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:17:35.219 [2024-11-15 10:46:05.526471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.219 10:46:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.219 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.219 "name": "Existed_Raid", 00:17:35.219 "uuid": "3e46614c-c19d-4a20-8c67-8523ff86c22c", 00:17:35.219 "strip_size_kb": 64, 00:17:35.219 "state": "configuring", 00:17:35.219 "raid_level": "raid5f", 00:17:35.219 "superblock": true, 00:17:35.219 "num_base_bdevs": 3, 00:17:35.219 "num_base_bdevs_discovered": 0, 00:17:35.219 "num_base_bdevs_operational": 3, 00:17:35.219 "base_bdevs_list": [ 00:17:35.220 { 00:17:35.220 "name": "BaseBdev1", 00:17:35.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.220 "is_configured": false, 00:17:35.220 "data_offset": 0, 00:17:35.220 "data_size": 0 00:17:35.220 }, 00:17:35.220 { 00:17:35.220 "name": "BaseBdev2", 00:17:35.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.220 "is_configured": false, 00:17:35.220 "data_offset": 0, 00:17:35.220 "data_size": 0 00:17:35.220 }, 00:17:35.220 { 00:17:35.220 "name": "BaseBdev3", 00:17:35.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.220 "is_configured": false, 00:17:35.220 "data_offset": 0, 00:17:35.220 "data_size": 0 00:17:35.220 } 00:17:35.220 ] 00:17:35.220 }' 00:17:35.220 10:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.220 10:46:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 [2024-11-15 10:46:06.062214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.787 
[2024-11-15 10:46:06.062261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 [2024-11-15 10:46:06.070218] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.787 [2024-11-15 10:46:06.070275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.787 [2024-11-15 10:46:06.070292] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.787 [2024-11-15 10:46:06.070308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.787 [2024-11-15 10:46:06.070318] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.787 [2024-11-15 10:46:06.070332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 [2024-11-15 10:46:06.110513] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.787 BaseBdev1 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 [ 00:17:35.787 { 00:17:35.787 "name": "BaseBdev1", 00:17:35.787 "aliases": [ 00:17:35.787 "6acfd124-8b66-4903-bc58-f9324ba31483" 00:17:35.787 ], 00:17:35.787 "product_name": "Malloc disk", 00:17:35.787 "block_size": 512, 00:17:35.787 
"num_blocks": 65536, 00:17:35.787 "uuid": "6acfd124-8b66-4903-bc58-f9324ba31483", 00:17:35.787 "assigned_rate_limits": { 00:17:35.787 "rw_ios_per_sec": 0, 00:17:35.787 "rw_mbytes_per_sec": 0, 00:17:35.787 "r_mbytes_per_sec": 0, 00:17:35.787 "w_mbytes_per_sec": 0 00:17:35.787 }, 00:17:35.787 "claimed": true, 00:17:35.787 "claim_type": "exclusive_write", 00:17:35.787 "zoned": false, 00:17:35.787 "supported_io_types": { 00:17:35.787 "read": true, 00:17:35.787 "write": true, 00:17:35.787 "unmap": true, 00:17:35.787 "flush": true, 00:17:35.787 "reset": true, 00:17:35.787 "nvme_admin": false, 00:17:35.787 "nvme_io": false, 00:17:35.787 "nvme_io_md": false, 00:17:35.787 "write_zeroes": true, 00:17:35.787 "zcopy": true, 00:17:35.787 "get_zone_info": false, 00:17:35.787 "zone_management": false, 00:17:35.787 "zone_append": false, 00:17:35.787 "compare": false, 00:17:35.787 "compare_and_write": false, 00:17:35.787 "abort": true, 00:17:35.787 "seek_hole": false, 00:17:35.787 "seek_data": false, 00:17:35.787 "copy": true, 00:17:35.787 "nvme_iov_md": false 00:17:35.787 }, 00:17:35.787 "memory_domains": [ 00:17:35.787 { 00:17:35.787 "dma_device_id": "system", 00:17:35.787 "dma_device_type": 1 00:17:35.787 }, 00:17:35.787 { 00:17:35.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.787 "dma_device_type": 2 00:17:35.787 } 00:17:35.787 ], 00:17:35.787 "driver_specific": {} 00:17:35.787 } 00:17:35.787 ] 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.787 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.787 "name": "Existed_Raid", 00:17:35.787 "uuid": "a7dcfc42-307e-4136-8430-6f551f83a2af", 00:17:35.787 "strip_size_kb": 64, 00:17:35.787 "state": "configuring", 00:17:35.787 "raid_level": "raid5f", 00:17:35.787 "superblock": true, 00:17:35.787 "num_base_bdevs": 3, 00:17:35.787 "num_base_bdevs_discovered": 1, 00:17:35.787 "num_base_bdevs_operational": 3, 00:17:35.787 "base_bdevs_list": [ 00:17:35.787 { 00:17:35.788 
"name": "BaseBdev1", 00:17:35.788 "uuid": "6acfd124-8b66-4903-bc58-f9324ba31483", 00:17:35.788 "is_configured": true, 00:17:35.788 "data_offset": 2048, 00:17:35.788 "data_size": 63488 00:17:35.788 }, 00:17:35.788 { 00:17:35.788 "name": "BaseBdev2", 00:17:35.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.788 "is_configured": false, 00:17:35.788 "data_offset": 0, 00:17:35.788 "data_size": 0 00:17:35.788 }, 00:17:35.788 { 00:17:35.788 "name": "BaseBdev3", 00:17:35.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.788 "is_configured": false, 00:17:35.788 "data_offset": 0, 00:17:35.788 "data_size": 0 00:17:35.788 } 00:17:35.788 ] 00:17:35.788 }' 00:17:35.788 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.788 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.373 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:36.373 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.373 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.373 [2024-11-15 10:46:06.662731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.373 [2024-11-15 10:46:06.662947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:36.373 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.373 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:36.373 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.373 10:46:06 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:36.373 [2024-11-15 10:46:06.670786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.373 [2024-11-15 10:46:06.673113] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.373 [2024-11-15 10:46:06.673167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.373 [2024-11-15 10:46:06.673184] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:36.373 [2024-11-15 10:46:06.673199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:36.373 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.373 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.374 "name": "Existed_Raid", 00:17:36.374 "uuid": "3d9b9de9-5290-4657-855e-e27d00d6f4a1", 00:17:36.374 "strip_size_kb": 64, 00:17:36.374 "state": "configuring", 00:17:36.374 "raid_level": "raid5f", 00:17:36.374 "superblock": true, 00:17:36.374 "num_base_bdevs": 3, 00:17:36.374 "num_base_bdevs_discovered": 1, 00:17:36.374 "num_base_bdevs_operational": 3, 00:17:36.374 "base_bdevs_list": [ 00:17:36.374 { 00:17:36.374 "name": "BaseBdev1", 00:17:36.374 "uuid": "6acfd124-8b66-4903-bc58-f9324ba31483", 00:17:36.374 "is_configured": true, 00:17:36.374 "data_offset": 2048, 00:17:36.374 "data_size": 63488 00:17:36.374 }, 00:17:36.374 { 00:17:36.374 "name": "BaseBdev2", 00:17:36.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.374 "is_configured": false, 00:17:36.374 "data_offset": 0, 00:17:36.374 "data_size": 0 00:17:36.374 }, 00:17:36.374 { 00:17:36.374 "name": "BaseBdev3", 00:17:36.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.374 "is_configured": false, 00:17:36.374 "data_offset": 0, 00:17:36.374 "data_size": 
0 00:17:36.374 } 00:17:36.374 ] 00:17:36.374 }' 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.374 10:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.942 [2024-11-15 10:46:07.229896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.942 BaseBdev2 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.942 [ 00:17:36.942 { 00:17:36.942 "name": "BaseBdev2", 00:17:36.942 "aliases": [ 00:17:36.942 "47095fdb-3c4b-4a3d-b418-b5fed3968879" 00:17:36.942 ], 00:17:36.942 "product_name": "Malloc disk", 00:17:36.942 "block_size": 512, 00:17:36.942 "num_blocks": 65536, 00:17:36.942 "uuid": "47095fdb-3c4b-4a3d-b418-b5fed3968879", 00:17:36.942 "assigned_rate_limits": { 00:17:36.942 "rw_ios_per_sec": 0, 00:17:36.942 "rw_mbytes_per_sec": 0, 00:17:36.942 "r_mbytes_per_sec": 0, 00:17:36.942 "w_mbytes_per_sec": 0 00:17:36.942 }, 00:17:36.942 "claimed": true, 00:17:36.942 "claim_type": "exclusive_write", 00:17:36.942 "zoned": false, 00:17:36.942 "supported_io_types": { 00:17:36.942 "read": true, 00:17:36.942 "write": true, 00:17:36.942 "unmap": true, 00:17:36.942 "flush": true, 00:17:36.942 "reset": true, 00:17:36.942 "nvme_admin": false, 00:17:36.942 "nvme_io": false, 00:17:36.942 "nvme_io_md": false, 00:17:36.942 "write_zeroes": true, 00:17:36.942 "zcopy": true, 00:17:36.942 "get_zone_info": false, 00:17:36.942 "zone_management": false, 00:17:36.942 "zone_append": false, 00:17:36.942 "compare": false, 00:17:36.942 "compare_and_write": false, 00:17:36.942 "abort": true, 00:17:36.942 "seek_hole": false, 00:17:36.942 "seek_data": false, 00:17:36.942 "copy": true, 00:17:36.942 "nvme_iov_md": false 00:17:36.942 }, 00:17:36.942 "memory_domains": [ 00:17:36.942 { 00:17:36.942 "dma_device_id": "system", 00:17:36.942 "dma_device_type": 1 00:17:36.942 }, 00:17:36.942 { 00:17:36.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.942 "dma_device_type": 2 00:17:36.942 } 
00:17:36.942 ], 00:17:36.942 "driver_specific": {} 00:17:36.942 } 00:17:36.942 ] 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.942 10:46:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.942 "name": "Existed_Raid", 00:17:36.942 "uuid": "3d9b9de9-5290-4657-855e-e27d00d6f4a1", 00:17:36.942 "strip_size_kb": 64, 00:17:36.942 "state": "configuring", 00:17:36.942 "raid_level": "raid5f", 00:17:36.942 "superblock": true, 00:17:36.942 "num_base_bdevs": 3, 00:17:36.942 "num_base_bdevs_discovered": 2, 00:17:36.942 "num_base_bdevs_operational": 3, 00:17:36.942 "base_bdevs_list": [ 00:17:36.942 { 00:17:36.942 "name": "BaseBdev1", 00:17:36.942 "uuid": "6acfd124-8b66-4903-bc58-f9324ba31483", 00:17:36.942 "is_configured": true, 00:17:36.942 "data_offset": 2048, 00:17:36.942 "data_size": 63488 00:17:36.942 }, 00:17:36.942 { 00:17:36.942 "name": "BaseBdev2", 00:17:36.942 "uuid": "47095fdb-3c4b-4a3d-b418-b5fed3968879", 00:17:36.942 "is_configured": true, 00:17:36.942 "data_offset": 2048, 00:17:36.942 "data_size": 63488 00:17:36.942 }, 00:17:36.942 { 00:17:36.942 "name": "BaseBdev3", 00:17:36.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.942 "is_configured": false, 00:17:36.942 "data_offset": 0, 00:17:36.942 "data_size": 0 00:17:36.942 } 00:17:36.942 ] 00:17:36.942 }' 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.942 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.510 [2024-11-15 10:46:07.830608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.510 [2024-11-15 10:46:07.830929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:37.510 [2024-11-15 10:46:07.830973] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:37.510 BaseBdev3 00:17:37.510 [2024-11-15 10:46:07.831298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.510 [2024-11-15 10:46:07.836698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:37.510 [2024-11-15 10:46:07.836729] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:37.510 [2024-11-15 10:46:07.837108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.510 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.510 [ 00:17:37.510 { 00:17:37.510 "name": "BaseBdev3", 00:17:37.510 "aliases": [ 00:17:37.510 "bc10bcdf-0496-499a-848f-4111f33f94cc" 00:17:37.510 ], 00:17:37.510 "product_name": "Malloc disk", 00:17:37.510 "block_size": 512, 00:17:37.510 "num_blocks": 65536, 00:17:37.510 "uuid": "bc10bcdf-0496-499a-848f-4111f33f94cc", 00:17:37.510 "assigned_rate_limits": { 00:17:37.510 "rw_ios_per_sec": 0, 00:17:37.510 "rw_mbytes_per_sec": 0, 00:17:37.510 "r_mbytes_per_sec": 0, 00:17:37.510 "w_mbytes_per_sec": 0 00:17:37.511 }, 00:17:37.511 "claimed": true, 00:17:37.511 "claim_type": "exclusive_write", 00:17:37.511 "zoned": false, 00:17:37.511 "supported_io_types": { 00:17:37.511 "read": true, 00:17:37.511 "write": true, 00:17:37.511 "unmap": true, 00:17:37.511 "flush": true, 00:17:37.511 "reset": true, 00:17:37.511 "nvme_admin": false, 00:17:37.511 "nvme_io": false, 00:17:37.511 "nvme_io_md": false, 00:17:37.511 "write_zeroes": true, 00:17:37.511 "zcopy": true, 00:17:37.511 "get_zone_info": false, 00:17:37.511 "zone_management": false, 00:17:37.511 "zone_append": false, 00:17:37.511 "compare": false, 00:17:37.511 "compare_and_write": false, 00:17:37.511 "abort": true, 00:17:37.511 "seek_hole": false, 00:17:37.511 "seek_data": false, 00:17:37.511 "copy": true, 00:17:37.511 
"nvme_iov_md": false 00:17:37.511 }, 00:17:37.511 "memory_domains": [ 00:17:37.511 { 00:17:37.511 "dma_device_id": "system", 00:17:37.511 "dma_device_type": 1 00:17:37.511 }, 00:17:37.511 { 00:17:37.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.511 "dma_device_type": 2 00:17:37.511 } 00:17:37.511 ], 00:17:37.511 "driver_specific": {} 00:17:37.511 } 00:17:37.511 ] 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.511 "name": "Existed_Raid", 00:17:37.511 "uuid": "3d9b9de9-5290-4657-855e-e27d00d6f4a1", 00:17:37.511 "strip_size_kb": 64, 00:17:37.511 "state": "online", 00:17:37.511 "raid_level": "raid5f", 00:17:37.511 "superblock": true, 00:17:37.511 "num_base_bdevs": 3, 00:17:37.511 "num_base_bdevs_discovered": 3, 00:17:37.511 "num_base_bdevs_operational": 3, 00:17:37.511 "base_bdevs_list": [ 00:17:37.511 { 00:17:37.511 "name": "BaseBdev1", 00:17:37.511 "uuid": "6acfd124-8b66-4903-bc58-f9324ba31483", 00:17:37.511 "is_configured": true, 00:17:37.511 "data_offset": 2048, 00:17:37.511 "data_size": 63488 00:17:37.511 }, 00:17:37.511 { 00:17:37.511 "name": "BaseBdev2", 00:17:37.511 "uuid": "47095fdb-3c4b-4a3d-b418-b5fed3968879", 00:17:37.511 "is_configured": true, 00:17:37.511 "data_offset": 2048, 00:17:37.511 "data_size": 63488 00:17:37.511 }, 00:17:37.511 { 00:17:37.511 "name": "BaseBdev3", 00:17:37.511 "uuid": "bc10bcdf-0496-499a-848f-4111f33f94cc", 00:17:37.511 "is_configured": true, 00:17:37.511 "data_offset": 2048, 00:17:37.511 "data_size": 63488 00:17:37.511 } 00:17:37.511 ] 00:17:37.511 }' 00:17:37.511 10:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.511 10:46:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.078 [2024-11-15 10:46:08.400236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:38.078 "name": "Existed_Raid", 00:17:38.078 "aliases": [ 00:17:38.078 "3d9b9de9-5290-4657-855e-e27d00d6f4a1" 00:17:38.078 ], 00:17:38.078 "product_name": "Raid Volume", 00:17:38.078 "block_size": 512, 00:17:38.078 "num_blocks": 126976, 00:17:38.078 "uuid": "3d9b9de9-5290-4657-855e-e27d00d6f4a1", 00:17:38.078 "assigned_rate_limits": { 00:17:38.078 "rw_ios_per_sec": 0, 00:17:38.078 
"rw_mbytes_per_sec": 0, 00:17:38.078 "r_mbytes_per_sec": 0, 00:17:38.078 "w_mbytes_per_sec": 0 00:17:38.078 }, 00:17:38.078 "claimed": false, 00:17:38.078 "zoned": false, 00:17:38.078 "supported_io_types": { 00:17:38.078 "read": true, 00:17:38.078 "write": true, 00:17:38.078 "unmap": false, 00:17:38.078 "flush": false, 00:17:38.078 "reset": true, 00:17:38.078 "nvme_admin": false, 00:17:38.078 "nvme_io": false, 00:17:38.078 "nvme_io_md": false, 00:17:38.078 "write_zeroes": true, 00:17:38.078 "zcopy": false, 00:17:38.078 "get_zone_info": false, 00:17:38.078 "zone_management": false, 00:17:38.078 "zone_append": false, 00:17:38.078 "compare": false, 00:17:38.078 "compare_and_write": false, 00:17:38.078 "abort": false, 00:17:38.078 "seek_hole": false, 00:17:38.078 "seek_data": false, 00:17:38.078 "copy": false, 00:17:38.078 "nvme_iov_md": false 00:17:38.078 }, 00:17:38.078 "driver_specific": { 00:17:38.078 "raid": { 00:17:38.078 "uuid": "3d9b9de9-5290-4657-855e-e27d00d6f4a1", 00:17:38.078 "strip_size_kb": 64, 00:17:38.078 "state": "online", 00:17:38.078 "raid_level": "raid5f", 00:17:38.078 "superblock": true, 00:17:38.078 "num_base_bdevs": 3, 00:17:38.078 "num_base_bdevs_discovered": 3, 00:17:38.078 "num_base_bdevs_operational": 3, 00:17:38.078 "base_bdevs_list": [ 00:17:38.078 { 00:17:38.078 "name": "BaseBdev1", 00:17:38.078 "uuid": "6acfd124-8b66-4903-bc58-f9324ba31483", 00:17:38.078 "is_configured": true, 00:17:38.078 "data_offset": 2048, 00:17:38.078 "data_size": 63488 00:17:38.078 }, 00:17:38.078 { 00:17:38.078 "name": "BaseBdev2", 00:17:38.078 "uuid": "47095fdb-3c4b-4a3d-b418-b5fed3968879", 00:17:38.078 "is_configured": true, 00:17:38.078 "data_offset": 2048, 00:17:38.078 "data_size": 63488 00:17:38.078 }, 00:17:38.078 { 00:17:38.078 "name": "BaseBdev3", 00:17:38.078 "uuid": "bc10bcdf-0496-499a-848f-4111f33f94cc", 00:17:38.078 "is_configured": true, 00:17:38.078 "data_offset": 2048, 00:17:38.078 "data_size": 63488 00:17:38.078 } 00:17:38.078 ] 00:17:38.078 } 
00:17:38.078 } 00:17:38.078 }' 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:38.078 BaseBdev2 00:17:38.078 BaseBdev3' 00:17:38.078 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.079 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.337 [2024-11-15 10:46:08.744108] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.337 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.338 "name": "Existed_Raid", 00:17:38.338 "uuid": "3d9b9de9-5290-4657-855e-e27d00d6f4a1", 00:17:38.338 "strip_size_kb": 64, 00:17:38.338 "state": "online", 00:17:38.338 "raid_level": "raid5f", 00:17:38.338 "superblock": true, 00:17:38.338 "num_base_bdevs": 3, 00:17:38.338 "num_base_bdevs_discovered": 2, 00:17:38.338 "num_base_bdevs_operational": 2, 00:17:38.338 "base_bdevs_list": [ 00:17:38.338 { 00:17:38.338 "name": null, 00:17:38.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.338 "is_configured": false, 00:17:38.338 "data_offset": 0, 00:17:38.338 "data_size": 63488 00:17:38.338 }, 00:17:38.338 { 00:17:38.338 "name": "BaseBdev2", 00:17:38.338 "uuid": "47095fdb-3c4b-4a3d-b418-b5fed3968879", 00:17:38.338 "is_configured": true, 00:17:38.338 "data_offset": 2048, 00:17:38.338 "data_size": 63488 00:17:38.338 }, 00:17:38.338 { 00:17:38.338 "name": "BaseBdev3", 00:17:38.338 "uuid": "bc10bcdf-0496-499a-848f-4111f33f94cc", 00:17:38.338 "is_configured": true, 00:17:38.338 "data_offset": 2048, 00:17:38.338 "data_size": 63488 00:17:38.338 } 00:17:38.338 ] 00:17:38.338 }' 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.338 10:46:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.941 10:46:09 
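The `verify_raid_bdev_state Existed_Raid online raid5f 64 2` call above checks that, after `bdev_malloc_delete BaseBdev1`, the raid5f array stays online with two of three base bdevs (raid5f tolerates the loss of one member). A condensed Python restatement of the checks applied to the `raid_bdev_info` JSON dumped above (trimmed to the inspected fields; the trimming is mine):

```python
import json

# The raid bdev state reported by `bdev_raid_get_bdevs all` after
# BaseBdev1 was deleted, reduced to the fields the test verifies.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

# verify_raid_bdev_state Existed_Raid online raid5f 64 2:
# raid5f has redundancy, so losing one of three base bdevs must
# leave the array online with two operational members.
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid5f"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_operational"] == 2
```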
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:38.941 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.941 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.941 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:38.941 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.941 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.942 [2024-11-15 10:46:09.404245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:38.942 [2024-11-15 10:46:09.404583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.942 [2024-11-15 10:46:09.484780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.942 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.200 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.200 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.201 [2024-11-15 10:46:09.540834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:39.201 [2024-11-15 10:46:09.540893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.201 BaseBdev2 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # 
[[ -z '' ]] 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.201 [ 00:17:39.201 { 00:17:39.201 "name": "BaseBdev2", 00:17:39.201 "aliases": [ 00:17:39.201 "a6264782-5ce6-448f-a76b-63e21f711e64" 00:17:39.201 ], 00:17:39.201 "product_name": "Malloc disk", 00:17:39.201 "block_size": 512, 00:17:39.201 "num_blocks": 65536, 00:17:39.201 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:39.201 "assigned_rate_limits": { 00:17:39.201 "rw_ios_per_sec": 0, 00:17:39.201 "rw_mbytes_per_sec": 0, 00:17:39.201 "r_mbytes_per_sec": 0, 00:17:39.201 "w_mbytes_per_sec": 0 00:17:39.201 }, 00:17:39.201 "claimed": false, 00:17:39.201 "zoned": false, 00:17:39.201 "supported_io_types": { 00:17:39.201 "read": true, 00:17:39.201 "write": true, 00:17:39.201 "unmap": true, 00:17:39.201 "flush": true, 00:17:39.201 "reset": true, 00:17:39.201 "nvme_admin": false, 00:17:39.201 "nvme_io": false, 00:17:39.201 "nvme_io_md": false, 00:17:39.201 "write_zeroes": true, 00:17:39.201 "zcopy": true, 00:17:39.201 "get_zone_info": false, 00:17:39.201 "zone_management": false, 00:17:39.201 "zone_append": false, 
00:17:39.201 "compare": false, 00:17:39.201 "compare_and_write": false, 00:17:39.201 "abort": true, 00:17:39.201 "seek_hole": false, 00:17:39.201 "seek_data": false, 00:17:39.201 "copy": true, 00:17:39.201 "nvme_iov_md": false 00:17:39.201 }, 00:17:39.201 "memory_domains": [ 00:17:39.201 { 00:17:39.201 "dma_device_id": "system", 00:17:39.201 "dma_device_type": 1 00:17:39.201 }, 00:17:39.201 { 00:17:39.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.201 "dma_device_type": 2 00:17:39.201 } 00:17:39.201 ], 00:17:39.201 "driver_specific": {} 00:17:39.201 } 00:17:39.201 ] 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.201 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.460 BaseBdev3 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:39.460 
10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.460 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.460 [ 00:17:39.460 { 00:17:39.460 "name": "BaseBdev3", 00:17:39.460 "aliases": [ 00:17:39.460 "87a440db-7250-4fc6-b522-b53703f538f5" 00:17:39.460 ], 00:17:39.460 "product_name": "Malloc disk", 00:17:39.460 "block_size": 512, 00:17:39.460 "num_blocks": 65536, 00:17:39.460 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 00:17:39.460 "assigned_rate_limits": { 00:17:39.460 "rw_ios_per_sec": 0, 00:17:39.460 "rw_mbytes_per_sec": 0, 00:17:39.460 "r_mbytes_per_sec": 0, 00:17:39.460 "w_mbytes_per_sec": 0 00:17:39.460 }, 00:17:39.460 "claimed": false, 00:17:39.460 "zoned": false, 00:17:39.460 "supported_io_types": { 00:17:39.461 "read": true, 00:17:39.461 "write": true, 00:17:39.461 "unmap": true, 00:17:39.461 "flush": true, 00:17:39.461 "reset": true, 00:17:39.461 "nvme_admin": false, 00:17:39.461 "nvme_io": false, 00:17:39.461 "nvme_io_md": false, 00:17:39.461 "write_zeroes": true, 00:17:39.461 "zcopy": true, 00:17:39.461 "get_zone_info": 
false, 00:17:39.461 "zone_management": false, 00:17:39.461 "zone_append": false, 00:17:39.461 "compare": false, 00:17:39.461 "compare_and_write": false, 00:17:39.461 "abort": true, 00:17:39.461 "seek_hole": false, 00:17:39.461 "seek_data": false, 00:17:39.461 "copy": true, 00:17:39.461 "nvme_iov_md": false 00:17:39.461 }, 00:17:39.461 "memory_domains": [ 00:17:39.461 { 00:17:39.461 "dma_device_id": "system", 00:17:39.461 "dma_device_type": 1 00:17:39.461 }, 00:17:39.461 { 00:17:39.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.461 "dma_device_type": 2 00:17:39.461 } 00:17:39.461 ], 00:17:39.461 "driver_specific": {} 00:17:39.461 } 00:17:39.461 ] 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.461 [2024-11-15 10:46:09.820669] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.461 [2024-11-15 10:46:09.820853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.461 [2024-11-15 10:46:09.820990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.461 [2024-11-15 10:46:09.823303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.461 10:46:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.461 "name": "Existed_Raid", 00:17:39.461 "uuid": "17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:39.461 "strip_size_kb": 64, 00:17:39.461 "state": "configuring", 00:17:39.461 "raid_level": "raid5f", 00:17:39.461 "superblock": true, 00:17:39.461 "num_base_bdevs": 3, 00:17:39.461 "num_base_bdevs_discovered": 2, 00:17:39.461 "num_base_bdevs_operational": 3, 00:17:39.461 "base_bdevs_list": [ 00:17:39.461 { 00:17:39.461 "name": "BaseBdev1", 00:17:39.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.461 "is_configured": false, 00:17:39.461 "data_offset": 0, 00:17:39.461 "data_size": 0 00:17:39.461 }, 00:17:39.461 { 00:17:39.461 "name": "BaseBdev2", 00:17:39.461 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:39.461 "is_configured": true, 00:17:39.461 "data_offset": 2048, 00:17:39.461 "data_size": 63488 00:17:39.461 }, 00:17:39.461 { 00:17:39.461 "name": "BaseBdev3", 00:17:39.461 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 00:17:39.461 "is_configured": true, 00:17:39.461 "data_offset": 2048, 00:17:39.461 "data_size": 63488 00:17:39.461 } 00:17:39.461 ] 00:17:39.461 }' 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.461 10:46:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.028 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.029 [2024-11-15 10:46:10.328826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.029 
10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.029 "name": "Existed_Raid", 00:17:40.029 "uuid": 
"17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:40.029 "strip_size_kb": 64, 00:17:40.029 "state": "configuring", 00:17:40.029 "raid_level": "raid5f", 00:17:40.029 "superblock": true, 00:17:40.029 "num_base_bdevs": 3, 00:17:40.029 "num_base_bdevs_discovered": 1, 00:17:40.029 "num_base_bdevs_operational": 3, 00:17:40.029 "base_bdevs_list": [ 00:17:40.029 { 00:17:40.029 "name": "BaseBdev1", 00:17:40.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.029 "is_configured": false, 00:17:40.029 "data_offset": 0, 00:17:40.029 "data_size": 0 00:17:40.029 }, 00:17:40.029 { 00:17:40.029 "name": null, 00:17:40.029 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:40.029 "is_configured": false, 00:17:40.029 "data_offset": 0, 00:17:40.029 "data_size": 63488 00:17:40.029 }, 00:17:40.029 { 00:17:40.029 "name": "BaseBdev3", 00:17:40.029 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 00:17:40.029 "is_configured": true, 00:17:40.029 "data_offset": 2048, 00:17:40.029 "data_size": 63488 00:17:40.029 } 00:17:40.029 ] 00:17:40.029 }' 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.029 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.288 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.288 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:40.288 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.288 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:40.548 10:46:10 
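The `bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid` call exercised earlier in this run corresponds to a JSON-RPC request along these lines. This is a sketch of the wire format, not a verbatim capture; the parameter names follow SPDK's JSON-RPC conventions but should be treated as assumptions:

```python
import json

# Sketch of the JSON-RPC request behind:
#   rpc_cmd bdev_raid_create -z 64 -s -r raid5f \
#       -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_raid_create",
    "params": {
        "name": "Existed_Raid",
        "raid_level": "raid5f",
        "strip_size_kb": 64,   # -z 64
        "superblock": True,    # -s: persist raid metadata on the base bdevs
        "base_bdevs": ["BaseBdev1", "BaseBdev2", "BaseBdev3"],  # -b
    },
}

# Round-trip through JSON as it would travel over the RPC socket.
decoded = json.loads(json.dumps(request))
assert decoded["method"] == "bdev_raid_create"
assert decoded["params"]["base_bdevs"] == ["BaseBdev1", "BaseBdev2", "BaseBdev3"]
```

Because the array was created with only BaseBdev2 and BaseBdev3 present, it sits in the `configuring` state seen above until the missing base bdev appears; removing BaseBdev2 then drops `num_base_bdevs_discovered` to 1 while `num_base_bdevs_operational` remains 3.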
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.548 [2024-11-15 10:46:10.914757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.548 BaseBdev1 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.548 [ 00:17:40.548 { 00:17:40.548 "name": "BaseBdev1", 00:17:40.548 "aliases": [ 00:17:40.548 "4d777ba2-109d-40b3-acff-64d6a552d686" 00:17:40.548 ], 00:17:40.548 "product_name": "Malloc disk", 00:17:40.548 "block_size": 512, 00:17:40.548 "num_blocks": 65536, 00:17:40.548 "uuid": "4d777ba2-109d-40b3-acff-64d6a552d686", 00:17:40.548 "assigned_rate_limits": { 00:17:40.548 "rw_ios_per_sec": 0, 00:17:40.548 "rw_mbytes_per_sec": 0, 00:17:40.548 "r_mbytes_per_sec": 0, 00:17:40.548 "w_mbytes_per_sec": 0 00:17:40.548 }, 00:17:40.548 "claimed": true, 00:17:40.548 "claim_type": "exclusive_write", 00:17:40.548 "zoned": false, 00:17:40.548 "supported_io_types": { 00:17:40.548 "read": true, 00:17:40.548 "write": true, 00:17:40.548 "unmap": true, 00:17:40.548 "flush": true, 00:17:40.548 "reset": true, 00:17:40.548 "nvme_admin": false, 00:17:40.548 "nvme_io": false, 00:17:40.548 "nvme_io_md": false, 00:17:40.548 "write_zeroes": true, 00:17:40.548 "zcopy": true, 00:17:40.548 "get_zone_info": false, 00:17:40.548 "zone_management": false, 00:17:40.548 "zone_append": false, 00:17:40.548 "compare": false, 00:17:40.548 "compare_and_write": false, 00:17:40.548 "abort": true, 00:17:40.548 "seek_hole": false, 00:17:40.548 "seek_data": false, 00:17:40.548 "copy": true, 00:17:40.548 "nvme_iov_md": false 00:17:40.548 }, 00:17:40.548 "memory_domains": [ 00:17:40.548 { 00:17:40.548 "dma_device_id": "system", 00:17:40.548 "dma_device_type": 1 00:17:40.548 }, 00:17:40.548 { 00:17:40.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.548 "dma_device_type": 2 00:17:40.548 } 00:17:40.548 ], 00:17:40.548 "driver_specific": {} 00:17:40.548 } 00:17:40.548 ] 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # 
return 0 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.548 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.549 "name": "Existed_Raid", 00:17:40.549 "uuid": 
"17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:40.549 "strip_size_kb": 64, 00:17:40.549 "state": "configuring", 00:17:40.549 "raid_level": "raid5f", 00:17:40.549 "superblock": true, 00:17:40.549 "num_base_bdevs": 3, 00:17:40.549 "num_base_bdevs_discovered": 2, 00:17:40.549 "num_base_bdevs_operational": 3, 00:17:40.549 "base_bdevs_list": [ 00:17:40.549 { 00:17:40.549 "name": "BaseBdev1", 00:17:40.549 "uuid": "4d777ba2-109d-40b3-acff-64d6a552d686", 00:17:40.549 "is_configured": true, 00:17:40.549 "data_offset": 2048, 00:17:40.549 "data_size": 63488 00:17:40.549 }, 00:17:40.549 { 00:17:40.549 "name": null, 00:17:40.549 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:40.549 "is_configured": false, 00:17:40.549 "data_offset": 0, 00:17:40.549 "data_size": 63488 00:17:40.549 }, 00:17:40.549 { 00:17:40.549 "name": "BaseBdev3", 00:17:40.549 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 00:17:40.549 "is_configured": true, 00:17:40.549 "data_offset": 2048, 00:17:40.549 "data_size": 63488 00:17:40.549 } 00:17:40.549 ] 00:17:40.549 }' 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.549 10:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:41.116 10:46:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.116 [2024-11-15 10:46:11.495007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.116 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.116 "name": "Existed_Raid", 00:17:41.116 "uuid": "17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:41.116 "strip_size_kb": 64, 00:17:41.116 "state": "configuring", 00:17:41.116 "raid_level": "raid5f", 00:17:41.116 "superblock": true, 00:17:41.116 "num_base_bdevs": 3, 00:17:41.116 "num_base_bdevs_discovered": 1, 00:17:41.116 "num_base_bdevs_operational": 3, 00:17:41.116 "base_bdevs_list": [ 00:17:41.116 { 00:17:41.116 "name": "BaseBdev1", 00:17:41.116 "uuid": "4d777ba2-109d-40b3-acff-64d6a552d686", 00:17:41.116 "is_configured": true, 00:17:41.116 "data_offset": 2048, 00:17:41.116 "data_size": 63488 00:17:41.116 }, 00:17:41.116 { 00:17:41.116 "name": null, 00:17:41.116 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:41.116 "is_configured": false, 00:17:41.117 "data_offset": 0, 00:17:41.117 "data_size": 63488 00:17:41.117 }, 00:17:41.117 { 00:17:41.117 "name": null, 00:17:41.117 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 00:17:41.117 "is_configured": false, 00:17:41.117 "data_offset": 0, 00:17:41.117 "data_size": 63488 00:17:41.117 } 00:17:41.117 ] 00:17:41.117 }' 00:17:41.117 10:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.117 10:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.684 [2024-11-15 10:46:12.079181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.684 "name": "Existed_Raid", 00:17:41.684 "uuid": "17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:41.684 "strip_size_kb": 64, 00:17:41.684 "state": "configuring", 00:17:41.684 "raid_level": "raid5f", 00:17:41.684 "superblock": true, 00:17:41.684 "num_base_bdevs": 3, 00:17:41.684 "num_base_bdevs_discovered": 2, 00:17:41.684 "num_base_bdevs_operational": 3, 00:17:41.684 "base_bdevs_list": [ 00:17:41.684 { 00:17:41.684 "name": "BaseBdev1", 00:17:41.684 "uuid": "4d777ba2-109d-40b3-acff-64d6a552d686", 00:17:41.684 "is_configured": true, 00:17:41.684 "data_offset": 2048, 00:17:41.684 "data_size": 63488 00:17:41.684 }, 00:17:41.684 { 00:17:41.684 "name": null, 00:17:41.684 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:41.684 "is_configured": false, 00:17:41.684 "data_offset": 0, 00:17:41.684 "data_size": 63488 00:17:41.684 }, 00:17:41.684 { 00:17:41.684 "name": "BaseBdev3", 00:17:41.684 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 
00:17:41.684 "is_configured": true, 00:17:41.684 "data_offset": 2048, 00:17:41.684 "data_size": 63488 00:17:41.684 } 00:17:41.684 ] 00:17:41.684 }' 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.684 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.251 [2024-11-15 10:46:12.655375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.251 "name": "Existed_Raid", 00:17:42.251 "uuid": "17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:42.251 "strip_size_kb": 64, 00:17:42.251 "state": "configuring", 00:17:42.251 "raid_level": "raid5f", 00:17:42.251 "superblock": true, 00:17:42.251 "num_base_bdevs": 3, 00:17:42.251 "num_base_bdevs_discovered": 1, 00:17:42.251 "num_base_bdevs_operational": 3, 00:17:42.251 "base_bdevs_list": [ 00:17:42.251 { 00:17:42.251 
"name": null, 00:17:42.251 "uuid": "4d777ba2-109d-40b3-acff-64d6a552d686", 00:17:42.251 "is_configured": false, 00:17:42.251 "data_offset": 0, 00:17:42.251 "data_size": 63488 00:17:42.251 }, 00:17:42.251 { 00:17:42.251 "name": null, 00:17:42.251 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:42.251 "is_configured": false, 00:17:42.251 "data_offset": 0, 00:17:42.251 "data_size": 63488 00:17:42.251 }, 00:17:42.251 { 00:17:42.251 "name": "BaseBdev3", 00:17:42.251 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 00:17:42.251 "is_configured": true, 00:17:42.251 "data_offset": 2048, 00:17:42.251 "data_size": 63488 00:17:42.251 } 00:17:42.251 ] 00:17:42.251 }' 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.251 10:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.818 [2024-11-15 
10:46:13.316181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.818 "name": "Existed_Raid", 00:17:42.818 "uuid": "17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:42.818 "strip_size_kb": 64, 00:17:42.818 "state": "configuring", 00:17:42.818 "raid_level": "raid5f", 00:17:42.818 "superblock": true, 00:17:42.818 "num_base_bdevs": 3, 00:17:42.818 "num_base_bdevs_discovered": 2, 00:17:42.818 "num_base_bdevs_operational": 3, 00:17:42.818 "base_bdevs_list": [ 00:17:42.818 { 00:17:42.818 "name": null, 00:17:42.818 "uuid": "4d777ba2-109d-40b3-acff-64d6a552d686", 00:17:42.818 "is_configured": false, 00:17:42.818 "data_offset": 0, 00:17:42.818 "data_size": 63488 00:17:42.818 }, 00:17:42.818 { 00:17:42.818 "name": "BaseBdev2", 00:17:42.818 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:42.818 "is_configured": true, 00:17:42.818 "data_offset": 2048, 00:17:42.818 "data_size": 63488 00:17:42.818 }, 00:17:42.818 { 00:17:42.818 "name": "BaseBdev3", 00:17:42.818 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 00:17:42.818 "is_configured": true, 00:17:42.818 "data_offset": 2048, 00:17:42.818 "data_size": 63488 00:17:42.818 } 00:17:42.818 ] 00:17:42.818 }' 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.818 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.462 10:46:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4d777ba2-109d-40b3-acff-64d6a552d686 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.462 [2024-11-15 10:46:13.948741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:43.462 [2024-11-15 10:46:13.949019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:43.462 [2024-11-15 10:46:13.949054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:43.462 NewBaseBdev 00:17:43.462 [2024-11-15 10:46:13.949384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:43.462 10:46:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.462 [2024-11-15 10:46:13.954376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:43.462 [2024-11-15 10:46:13.954546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:43.462 [2024-11-15 10:46:13.954909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.462 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.462 [ 00:17:43.462 { 00:17:43.462 "name": "NewBaseBdev", 00:17:43.462 "aliases": [ 00:17:43.462 "4d777ba2-109d-40b3-acff-64d6a552d686" 00:17:43.462 ], 00:17:43.462 "product_name": "Malloc 
disk", 00:17:43.462 "block_size": 512, 00:17:43.462 "num_blocks": 65536, 00:17:43.462 "uuid": "4d777ba2-109d-40b3-acff-64d6a552d686", 00:17:43.462 "assigned_rate_limits": { 00:17:43.462 "rw_ios_per_sec": 0, 00:17:43.462 "rw_mbytes_per_sec": 0, 00:17:43.462 "r_mbytes_per_sec": 0, 00:17:43.462 "w_mbytes_per_sec": 0 00:17:43.462 }, 00:17:43.462 "claimed": true, 00:17:43.462 "claim_type": "exclusive_write", 00:17:43.462 "zoned": false, 00:17:43.462 "supported_io_types": { 00:17:43.462 "read": true, 00:17:43.462 "write": true, 00:17:43.462 "unmap": true, 00:17:43.462 "flush": true, 00:17:43.462 "reset": true, 00:17:43.462 "nvme_admin": false, 00:17:43.462 "nvme_io": false, 00:17:43.462 "nvme_io_md": false, 00:17:43.462 "write_zeroes": true, 00:17:43.462 "zcopy": true, 00:17:43.462 "get_zone_info": false, 00:17:43.462 "zone_management": false, 00:17:43.462 "zone_append": false, 00:17:43.462 "compare": false, 00:17:43.462 "compare_and_write": false, 00:17:43.462 "abort": true, 00:17:43.462 "seek_hole": false, 00:17:43.462 "seek_data": false, 00:17:43.462 "copy": true, 00:17:43.462 "nvme_iov_md": false 00:17:43.462 }, 00:17:43.462 "memory_domains": [ 00:17:43.462 { 00:17:43.462 "dma_device_id": "system", 00:17:43.462 "dma_device_type": 1 00:17:43.462 }, 00:17:43.462 { 00:17:43.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.463 "dma_device_type": 2 00:17:43.463 } 00:17:43.463 ], 00:17:43.463 "driver_specific": {} 00:17:43.463 } 00:17:43.463 ] 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.463 10:46:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.463 10:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.744 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.744 "name": "Existed_Raid", 00:17:43.744 "uuid": "17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:43.744 "strip_size_kb": 64, 00:17:43.744 "state": "online", 00:17:43.744 "raid_level": "raid5f", 00:17:43.744 "superblock": true, 00:17:43.744 "num_base_bdevs": 3, 00:17:43.744 "num_base_bdevs_discovered": 3, 00:17:43.744 "num_base_bdevs_operational": 3, 00:17:43.744 
"base_bdevs_list": [ 00:17:43.744 { 00:17:43.744 "name": "NewBaseBdev", 00:17:43.744 "uuid": "4d777ba2-109d-40b3-acff-64d6a552d686", 00:17:43.744 "is_configured": true, 00:17:43.744 "data_offset": 2048, 00:17:43.744 "data_size": 63488 00:17:43.744 }, 00:17:43.744 { 00:17:43.744 "name": "BaseBdev2", 00:17:43.744 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:43.744 "is_configured": true, 00:17:43.744 "data_offset": 2048, 00:17:43.744 "data_size": 63488 00:17:43.744 }, 00:17:43.744 { 00:17:43.744 "name": "BaseBdev3", 00:17:43.744 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 00:17:43.744 "is_configured": true, 00:17:43.744 "data_offset": 2048, 00:17:43.744 "data_size": 63488 00:17:43.744 } 00:17:43.744 ] 00:17:43.744 }' 00:17:43.744 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.744 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.002 [2024-11-15 10:46:14.536723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.002 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.261 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:44.261 "name": "Existed_Raid", 00:17:44.261 "aliases": [ 00:17:44.261 "17b45923-d1e3-4efe-8245-bc26895a5efb" 00:17:44.261 ], 00:17:44.261 "product_name": "Raid Volume", 00:17:44.261 "block_size": 512, 00:17:44.261 "num_blocks": 126976, 00:17:44.261 "uuid": "17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:44.261 "assigned_rate_limits": { 00:17:44.261 "rw_ios_per_sec": 0, 00:17:44.261 "rw_mbytes_per_sec": 0, 00:17:44.261 "r_mbytes_per_sec": 0, 00:17:44.261 "w_mbytes_per_sec": 0 00:17:44.261 }, 00:17:44.261 "claimed": false, 00:17:44.261 "zoned": false, 00:17:44.261 "supported_io_types": { 00:17:44.261 "read": true, 00:17:44.261 "write": true, 00:17:44.261 "unmap": false, 00:17:44.261 "flush": false, 00:17:44.261 "reset": true, 00:17:44.261 "nvme_admin": false, 00:17:44.261 "nvme_io": false, 00:17:44.261 "nvme_io_md": false, 00:17:44.261 "write_zeroes": true, 00:17:44.261 "zcopy": false, 00:17:44.261 "get_zone_info": false, 00:17:44.261 "zone_management": false, 00:17:44.261 "zone_append": false, 00:17:44.261 "compare": false, 00:17:44.261 "compare_and_write": false, 00:17:44.261 "abort": false, 00:17:44.261 "seek_hole": false, 00:17:44.261 "seek_data": false, 00:17:44.261 "copy": false, 00:17:44.261 "nvme_iov_md": false 00:17:44.261 }, 00:17:44.261 "driver_specific": { 00:17:44.261 "raid": { 00:17:44.261 "uuid": "17b45923-d1e3-4efe-8245-bc26895a5efb", 00:17:44.261 "strip_size_kb": 64, 00:17:44.261 "state": "online", 00:17:44.261 "raid_level": "raid5f", 00:17:44.261 "superblock": true, 00:17:44.261 
"num_base_bdevs": 3, 00:17:44.262 "num_base_bdevs_discovered": 3, 00:17:44.262 "num_base_bdevs_operational": 3, 00:17:44.262 "base_bdevs_list": [ 00:17:44.262 { 00:17:44.262 "name": "NewBaseBdev", 00:17:44.262 "uuid": "4d777ba2-109d-40b3-acff-64d6a552d686", 00:17:44.262 "is_configured": true, 00:17:44.262 "data_offset": 2048, 00:17:44.262 "data_size": 63488 00:17:44.262 }, 00:17:44.262 { 00:17:44.262 "name": "BaseBdev2", 00:17:44.262 "uuid": "a6264782-5ce6-448f-a76b-63e21f711e64", 00:17:44.262 "is_configured": true, 00:17:44.262 "data_offset": 2048, 00:17:44.262 "data_size": 63488 00:17:44.262 }, 00:17:44.262 { 00:17:44.262 "name": "BaseBdev3", 00:17:44.262 "uuid": "87a440db-7250-4fc6-b522-b53703f538f5", 00:17:44.262 "is_configured": true, 00:17:44.262 "data_offset": 2048, 00:17:44.262 "data_size": 63488 00:17:44.262 } 00:17:44.262 ] 00:17:44.262 } 00:17:44.262 } 00:17:44.262 }' 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:44.262 BaseBdev2 00:17:44.262 BaseBdev3' 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.262 
10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.262 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.262 10:46:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.520 [2024-11-15 10:46:14.872573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.520 [2024-11-15 10:46:14.872741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.520 [2024-11-15 10:46:14.872865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.520 [2024-11-15 10:46:14.873230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.520 [2024-11-15 10:46:14.873258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81015 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 81015 ']' 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 81015 00:17:44.520 10:46:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81015 00:17:44.520 killing process with pid 81015 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81015' 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 81015 00:17:44.520 [2024-11-15 10:46:14.909599] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.520 10:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 81015 00:17:44.777 [2024-11-15 10:46:15.166875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.712 ************************************ 00:17:45.712 END TEST raid5f_state_function_test_sb 00:17:45.712 ************************************ 00:17:45.712 10:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:45.712 00:17:45.712 real 0m11.731s 00:17:45.712 user 0m19.732s 00:17:45.712 sys 0m1.469s 00:17:45.712 10:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:45.712 10:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.712 10:46:16 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:17:45.712 10:46:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:45.712 
10:46:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:45.712 10:46:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.712 ************************************ 00:17:45.712 START TEST raid5f_superblock_test 00:17:45.712 ************************************ 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81653 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81653 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81653 ']' 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:45.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:45.712 10:46:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.971 [2024-11-15 10:46:16.321108] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:17:45.971 [2024-11-15 10:46:16.321538] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81653 ] 00:17:45.971 [2024-11-15 10:46:16.504913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.230 [2024-11-15 10:46:16.666750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.490 [2024-11-15 10:46:16.851026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.490 [2024-11-15 10:46:16.851104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.058 malloc1 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.058 [2024-11-15 10:46:17.377826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:47.058 [2024-11-15 10:46:17.378064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.058 [2024-11-15 10:46:17.378162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:47.058 [2024-11-15 10:46:17.378415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.058 [2024-11-15 10:46:17.381769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.058 [2024-11-15 10:46:17.381937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:47.058 pt1 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.058 malloc2 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.058 [2024-11-15 10:46:17.431582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.058 [2024-11-15 10:46:17.431786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.058 [2024-11-15 10:46:17.431872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:47.058 [2024-11-15 10:46:17.432102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.058 [2024-11-15 10:46:17.434927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.058 [2024-11-15 10:46:17.435110] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.058 pt2 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.058 malloc3 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.058 [2024-11-15 10:46:17.489832] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:47.058 [2024-11-15 10:46:17.490029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.058 [2024-11-15 10:46:17.490077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:47.058 [2024-11-15 10:46:17.490094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.058 [2024-11-15 10:46:17.492727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.058 [2024-11-15 10:46:17.492779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:47.058 pt3 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:47.058 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.059 [2024-11-15 10:46:17.497895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:47.059 [2024-11-15 10:46:17.500368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.059 [2024-11-15 10:46:17.500482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:47.059 [2024-11-15 10:46:17.500749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:47.059 [2024-11-15 10:46:17.500782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:17:47.059 [2024-11-15 10:46:17.501095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:47.059 [2024-11-15 10:46:17.506258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:47.059 [2024-11-15 10:46:17.506295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:47.059 [2024-11-15 10:46:17.506615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.059 
10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.059 "name": "raid_bdev1", 00:17:47.059 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:47.059 "strip_size_kb": 64, 00:17:47.059 "state": "online", 00:17:47.059 "raid_level": "raid5f", 00:17:47.059 "superblock": true, 00:17:47.059 "num_base_bdevs": 3, 00:17:47.059 "num_base_bdevs_discovered": 3, 00:17:47.059 "num_base_bdevs_operational": 3, 00:17:47.059 "base_bdevs_list": [ 00:17:47.059 { 00:17:47.059 "name": "pt1", 00:17:47.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.059 "is_configured": true, 00:17:47.059 "data_offset": 2048, 00:17:47.059 "data_size": 63488 00:17:47.059 }, 00:17:47.059 { 00:17:47.059 "name": "pt2", 00:17:47.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.059 "is_configured": true, 00:17:47.059 "data_offset": 2048, 00:17:47.059 "data_size": 63488 00:17:47.059 }, 00:17:47.059 { 00:17:47.059 "name": "pt3", 00:17:47.059 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:47.059 "is_configured": true, 00:17:47.059 "data_offset": 2048, 00:17:47.059 "data_size": 63488 00:17:47.059 } 00:17:47.059 ] 00:17:47.059 }' 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.059 10:46:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:47.627 10:46:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.627 [2024-11-15 10:46:18.012232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:47.627 "name": "raid_bdev1", 00:17:47.627 "aliases": [ 00:17:47.627 "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f" 00:17:47.627 ], 00:17:47.627 "product_name": "Raid Volume", 00:17:47.627 "block_size": 512, 00:17:47.627 "num_blocks": 126976, 00:17:47.627 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:47.627 "assigned_rate_limits": { 00:17:47.627 "rw_ios_per_sec": 0, 00:17:47.627 "rw_mbytes_per_sec": 0, 00:17:47.627 "r_mbytes_per_sec": 0, 00:17:47.627 "w_mbytes_per_sec": 0 00:17:47.627 }, 00:17:47.627 "claimed": false, 00:17:47.627 "zoned": false, 00:17:47.627 "supported_io_types": { 00:17:47.627 "read": true, 00:17:47.627 "write": true, 00:17:47.627 "unmap": false, 00:17:47.627 "flush": false, 00:17:47.627 "reset": true, 00:17:47.627 "nvme_admin": false, 00:17:47.627 "nvme_io": false, 00:17:47.627 "nvme_io_md": false, 
00:17:47.627 "write_zeroes": true, 00:17:47.627 "zcopy": false, 00:17:47.627 "get_zone_info": false, 00:17:47.627 "zone_management": false, 00:17:47.627 "zone_append": false, 00:17:47.627 "compare": false, 00:17:47.627 "compare_and_write": false, 00:17:47.627 "abort": false, 00:17:47.627 "seek_hole": false, 00:17:47.627 "seek_data": false, 00:17:47.627 "copy": false, 00:17:47.627 "nvme_iov_md": false 00:17:47.627 }, 00:17:47.627 "driver_specific": { 00:17:47.627 "raid": { 00:17:47.627 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:47.627 "strip_size_kb": 64, 00:17:47.627 "state": "online", 00:17:47.627 "raid_level": "raid5f", 00:17:47.627 "superblock": true, 00:17:47.627 "num_base_bdevs": 3, 00:17:47.627 "num_base_bdevs_discovered": 3, 00:17:47.627 "num_base_bdevs_operational": 3, 00:17:47.627 "base_bdevs_list": [ 00:17:47.627 { 00:17:47.627 "name": "pt1", 00:17:47.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.627 "is_configured": true, 00:17:47.627 "data_offset": 2048, 00:17:47.627 "data_size": 63488 00:17:47.627 }, 00:17:47.627 { 00:17:47.627 "name": "pt2", 00:17:47.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.627 "is_configured": true, 00:17:47.627 "data_offset": 2048, 00:17:47.627 "data_size": 63488 00:17:47.627 }, 00:17:47.627 { 00:17:47.627 "name": "pt3", 00:17:47.627 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:47.627 "is_configured": true, 00:17:47.627 "data_offset": 2048, 00:17:47.627 "data_size": 63488 00:17:47.627 } 00:17:47.627 ] 00:17:47.627 } 00:17:47.627 } 00:17:47.627 }' 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:47.627 pt2 00:17:47.627 pt3' 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:47.627 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.628 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.628 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:47.628 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.628 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.628 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.628 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.887 
10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:47.887 [2024-11-15 10:46:18.284325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=909ccf2e-aa5f-4084-9cd1-82eff0f1c72f 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 909ccf2e-aa5f-4084-9cd1-82eff0f1c72f ']' 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:47.887 10:46:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.887 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.887 [2024-11-15 10:46:18.336229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.887 [2024-11-15 10:46:18.336590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.887 [2024-11-15 10:46:18.336791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.887 [2024-11-15 10:46:18.337003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.888 [2024-11-15 10:46:18.337041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.888 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.147 [2024-11-15 10:46:18.484241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:48.147 [2024-11-15 10:46:18.486828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:48.147 [2024-11-15 10:46:18.486911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:48.147 [2024-11-15 10:46:18.487010] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:48.147 [2024-11-15 10:46:18.487089] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:48.147 [2024-11-15 10:46:18.487125] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:48.147 [2024-11-15 10:46:18.487154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.147 [2024-11-15 10:46:18.487169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:48.147 request: 00:17:48.147 { 00:17:48.147 "name": "raid_bdev1", 00:17:48.147 "raid_level": "raid5f", 00:17:48.147 "base_bdevs": [ 00:17:48.147 "malloc1", 00:17:48.147 "malloc2", 00:17:48.147 "malloc3" 00:17:48.147 ], 00:17:48.147 "strip_size_kb": 64, 00:17:48.147 "superblock": false, 00:17:48.147 "method": "bdev_raid_create", 00:17:48.147 "req_id": 1 00:17:48.147 } 00:17:48.147 Got JSON-RPC error response 00:17:48.147 response: 00:17:48.147 { 00:17:48.147 "code": -17, 00:17:48.147 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:48.147 } 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.147 
10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.147 [2024-11-15 10:46:18.560136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:48.147 [2024-11-15 10:46:18.560343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.147 [2024-11-15 10:46:18.560400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:48.147 [2024-11-15 10:46:18.560417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.147 [2024-11-15 10:46:18.563097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.147 [2024-11-15 10:46:18.563145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:48.147 [2024-11-15 10:46:18.563258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:48.147 [2024-11-15 10:46:18.563325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.147 pt1 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.147 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.147 "name": "raid_bdev1", 00:17:48.147 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:48.147 "strip_size_kb": 64, 00:17:48.147 "state": "configuring", 00:17:48.147 "raid_level": "raid5f", 00:17:48.147 "superblock": true, 00:17:48.147 "num_base_bdevs": 3, 00:17:48.147 "num_base_bdevs_discovered": 1, 00:17:48.147 
"num_base_bdevs_operational": 3, 00:17:48.147 "base_bdevs_list": [ 00:17:48.147 { 00:17:48.147 "name": "pt1", 00:17:48.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.147 "is_configured": true, 00:17:48.147 "data_offset": 2048, 00:17:48.147 "data_size": 63488 00:17:48.147 }, 00:17:48.147 { 00:17:48.147 "name": null, 00:17:48.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.147 "is_configured": false, 00:17:48.147 "data_offset": 2048, 00:17:48.147 "data_size": 63488 00:17:48.147 }, 00:17:48.147 { 00:17:48.147 "name": null, 00:17:48.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.147 "is_configured": false, 00:17:48.148 "data_offset": 2048, 00:17:48.148 "data_size": 63488 00:17:48.148 } 00:17:48.148 ] 00:17:48.148 }' 00:17:48.148 10:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.148 10:46:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.714 [2024-11-15 10:46:19.072274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:48.714 [2024-11-15 10:46:19.072367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.714 [2024-11-15 10:46:19.072403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:48.714 [2024-11-15 10:46:19.072419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.714 [2024-11-15 10:46:19.072950] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.714 [2024-11-15 10:46:19.073002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:48.714 [2024-11-15 10:46:19.073113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:48.714 [2024-11-15 10:46:19.073153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.714 pt2 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.714 [2024-11-15 10:46:19.080261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.714 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.715 "name": "raid_bdev1", 00:17:48.715 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:48.715 "strip_size_kb": 64, 00:17:48.715 "state": "configuring", 00:17:48.715 "raid_level": "raid5f", 00:17:48.715 "superblock": true, 00:17:48.715 "num_base_bdevs": 3, 00:17:48.715 "num_base_bdevs_discovered": 1, 00:17:48.715 "num_base_bdevs_operational": 3, 00:17:48.715 "base_bdevs_list": [ 00:17:48.715 { 00:17:48.715 "name": "pt1", 00:17:48.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.715 "is_configured": true, 00:17:48.715 "data_offset": 2048, 00:17:48.715 "data_size": 63488 00:17:48.715 }, 00:17:48.715 { 00:17:48.715 "name": null, 00:17:48.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.715 "is_configured": false, 00:17:48.715 "data_offset": 0, 00:17:48.715 "data_size": 63488 00:17:48.715 }, 00:17:48.715 { 00:17:48.715 "name": null, 00:17:48.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.715 "is_configured": false, 00:17:48.715 "data_offset": 2048, 00:17:48.715 "data_size": 63488 00:17:48.715 } 00:17:48.715 ] 00:17:48.715 }' 00:17:48.715 10:46:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.715 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.283 [2024-11-15 10:46:19.616426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:49.283 [2024-11-15 10:46:19.616571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.283 [2024-11-15 10:46:19.616638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:49.283 [2024-11-15 10:46:19.616710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.283 [2024-11-15 10:46:19.617290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.283 [2024-11-15 10:46:19.617478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.283 [2024-11-15 10:46:19.617720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:49.283 [2024-11-15 10:46:19.617770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.283 pt2 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:49.283 10:46:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.283 [2024-11-15 10:46:19.628425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:49.283 [2024-11-15 10:46:19.628486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.283 [2024-11-15 10:46:19.628509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:49.283 [2024-11-15 10:46:19.628524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.283 [2024-11-15 10:46:19.628967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.283 [2024-11-15 10:46:19.629017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:49.283 [2024-11-15 10:46:19.629099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:49.283 [2024-11-15 10:46:19.629134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:49.283 [2024-11-15 10:46:19.629297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:49.283 [2024-11-15 10:46:19.629326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:49.283 [2024-11-15 10:46:19.629651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:49.283 [2024-11-15 10:46:19.634464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:49.283 [2024-11-15 10:46:19.634489] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:49.283 [2024-11-15 10:46:19.634711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.283 pt3 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.283 "name": "raid_bdev1", 00:17:49.283 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:49.283 "strip_size_kb": 64, 00:17:49.283 "state": "online", 00:17:49.283 "raid_level": "raid5f", 00:17:49.283 "superblock": true, 00:17:49.283 "num_base_bdevs": 3, 00:17:49.283 "num_base_bdevs_discovered": 3, 00:17:49.283 "num_base_bdevs_operational": 3, 00:17:49.283 "base_bdevs_list": [ 00:17:49.283 { 00:17:49.283 "name": "pt1", 00:17:49.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.283 "is_configured": true, 00:17:49.283 "data_offset": 2048, 00:17:49.283 "data_size": 63488 00:17:49.283 }, 00:17:49.283 { 00:17:49.283 "name": "pt2", 00:17:49.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.283 "is_configured": true, 00:17:49.283 "data_offset": 2048, 00:17:49.283 "data_size": 63488 00:17:49.283 }, 00:17:49.283 { 00:17:49.283 "name": "pt3", 00:17:49.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.283 "is_configured": true, 00:17:49.283 "data_offset": 2048, 00:17:49.283 "data_size": 63488 00:17:49.283 } 00:17:49.283 ] 00:17:49.283 }' 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.283 10:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:49.852 
10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.852 [2024-11-15 10:46:20.116371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:49.852 "name": "raid_bdev1", 00:17:49.852 "aliases": [ 00:17:49.852 "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f" 00:17:49.852 ], 00:17:49.852 "product_name": "Raid Volume", 00:17:49.852 "block_size": 512, 00:17:49.852 "num_blocks": 126976, 00:17:49.852 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:49.852 "assigned_rate_limits": { 00:17:49.852 "rw_ios_per_sec": 0, 00:17:49.852 "rw_mbytes_per_sec": 0, 00:17:49.852 "r_mbytes_per_sec": 0, 00:17:49.852 "w_mbytes_per_sec": 0 00:17:49.852 }, 00:17:49.852 "claimed": false, 00:17:49.852 "zoned": false, 00:17:49.852 "supported_io_types": { 00:17:49.852 "read": true, 00:17:49.852 "write": true, 00:17:49.852 "unmap": false, 00:17:49.852 "flush": false, 00:17:49.852 "reset": true, 00:17:49.852 "nvme_admin": false, 00:17:49.852 "nvme_io": false, 00:17:49.852 "nvme_io_md": false, 00:17:49.852 "write_zeroes": true, 00:17:49.852 "zcopy": false, 00:17:49.852 "get_zone_info": false, 
00:17:49.852 "zone_management": false, 00:17:49.852 "zone_append": false, 00:17:49.852 "compare": false, 00:17:49.852 "compare_and_write": false, 00:17:49.852 "abort": false, 00:17:49.852 "seek_hole": false, 00:17:49.852 "seek_data": false, 00:17:49.852 "copy": false, 00:17:49.852 "nvme_iov_md": false 00:17:49.852 }, 00:17:49.852 "driver_specific": { 00:17:49.852 "raid": { 00:17:49.852 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:49.852 "strip_size_kb": 64, 00:17:49.852 "state": "online", 00:17:49.852 "raid_level": "raid5f", 00:17:49.852 "superblock": true, 00:17:49.852 "num_base_bdevs": 3, 00:17:49.852 "num_base_bdevs_discovered": 3, 00:17:49.852 "num_base_bdevs_operational": 3, 00:17:49.852 "base_bdevs_list": [ 00:17:49.852 { 00:17:49.852 "name": "pt1", 00:17:49.852 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.852 "is_configured": true, 00:17:49.852 "data_offset": 2048, 00:17:49.852 "data_size": 63488 00:17:49.852 }, 00:17:49.852 { 00:17:49.852 "name": "pt2", 00:17:49.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.852 "is_configured": true, 00:17:49.852 "data_offset": 2048, 00:17:49.852 "data_size": 63488 00:17:49.852 }, 00:17:49.852 { 00:17:49.852 "name": "pt3", 00:17:49.852 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.852 "is_configured": true, 00:17:49.852 "data_offset": 2048, 00:17:49.852 "data_size": 63488 00:17:49.852 } 00:17:49.852 ] 00:17:49.852 } 00:17:49.852 } 00:17:49.852 }' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:49.852 pt2 00:17:49.852 pt3' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.852 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:49.852 [2024-11-15 10:46:20.404287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 909ccf2e-aa5f-4084-9cd1-82eff0f1c72f '!=' 909ccf2e-aa5f-4084-9cd1-82eff0f1c72f ']' 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:50.111 10:46:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.111 [2024-11-15 10:46:20.460139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.111 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.111 "name": "raid_bdev1", 00:17:50.111 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:50.111 "strip_size_kb": 64, 00:17:50.111 "state": "online", 00:17:50.111 "raid_level": "raid5f", 00:17:50.111 "superblock": true, 00:17:50.111 "num_base_bdevs": 3, 00:17:50.111 "num_base_bdevs_discovered": 2, 00:17:50.111 "num_base_bdevs_operational": 2, 00:17:50.111 "base_bdevs_list": [ 00:17:50.111 { 00:17:50.111 "name": null, 00:17:50.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.111 "is_configured": false, 00:17:50.111 "data_offset": 0, 00:17:50.111 "data_size": 63488 00:17:50.111 }, 00:17:50.111 { 00:17:50.111 "name": "pt2", 00:17:50.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.111 "is_configured": true, 00:17:50.111 "data_offset": 2048, 00:17:50.111 "data_size": 63488 00:17:50.111 }, 00:17:50.111 { 00:17:50.111 "name": "pt3", 00:17:50.111 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.111 "is_configured": true, 00:17:50.111 "data_offset": 2048, 00:17:50.111 "data_size": 63488 00:17:50.111 } 00:17:50.111 ] 00:17:50.111 }' 00:17:50.112 10:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.112 10:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.694 [2024-11-15 10:46:21.008238] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:17:50.694 [2024-11-15 10:46:21.008277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.694 [2024-11-15 10:46:21.008401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.694 [2024-11-15 10:46:21.008491] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.694 [2024-11-15 10:46:21.008513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.694 10:46:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.694 [2024-11-15 10:46:21.104235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.694 [2024-11-15 10:46:21.104315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.694 [2024-11-15 10:46:21.104342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:50.694 [2024-11-15 10:46:21.104374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:50.694 [2024-11-15 10:46:21.107024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.694 [2024-11-15 10:46:21.107213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.694 [2024-11-15 10:46:21.107336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:50.694 [2024-11-15 10:46:21.107422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.694 pt2 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.694 10:46:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.694 "name": "raid_bdev1", 00:17:50.694 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:50.694 "strip_size_kb": 64, 00:17:50.694 "state": "configuring", 00:17:50.694 "raid_level": "raid5f", 00:17:50.694 "superblock": true, 00:17:50.694 "num_base_bdevs": 3, 00:17:50.694 "num_base_bdevs_discovered": 1, 00:17:50.694 "num_base_bdevs_operational": 2, 00:17:50.694 "base_bdevs_list": [ 00:17:50.694 { 00:17:50.694 "name": null, 00:17:50.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.694 "is_configured": false, 00:17:50.694 "data_offset": 2048, 00:17:50.694 "data_size": 63488 00:17:50.694 }, 00:17:50.694 { 00:17:50.694 "name": "pt2", 00:17:50.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.694 "is_configured": true, 00:17:50.694 "data_offset": 2048, 00:17:50.694 "data_size": 63488 00:17:50.694 }, 00:17:50.694 { 00:17:50.694 "name": null, 00:17:50.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.694 "is_configured": false, 00:17:50.694 "data_offset": 2048, 00:17:50.694 "data_size": 63488 00:17:50.694 } 00:17:50.694 ] 00:17:50.694 }' 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.694 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.262 [2024-11-15 10:46:21.620391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:51.262 [2024-11-15 10:46:21.620616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.262 [2024-11-15 10:46:21.620801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:51.262 [2024-11-15 10:46:21.620929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.262 [2024-11-15 10:46:21.621552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.262 [2024-11-15 10:46:21.621604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:51.262 [2024-11-15 10:46:21.621705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:51.262 [2024-11-15 10:46:21.621745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:51.262 [2024-11-15 10:46:21.621903] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:51.262 [2024-11-15 10:46:21.621924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:51.262 [2024-11-15 10:46:21.622223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:51.262 [2024-11-15 10:46:21.627075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:51.262 [2024-11-15 10:46:21.627099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:51.262 pt3 00:17:51.262 [2024-11-15 10:46:21.627488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.262 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.263 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.263 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.263 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.263 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.263 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.263 10:46:21 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.263 "name": "raid_bdev1", 00:17:51.263 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:51.263 "strip_size_kb": 64, 00:17:51.263 "state": "online", 00:17:51.263 "raid_level": "raid5f", 00:17:51.263 "superblock": true, 00:17:51.263 "num_base_bdevs": 3, 00:17:51.263 "num_base_bdevs_discovered": 2, 00:17:51.263 "num_base_bdevs_operational": 2, 00:17:51.263 "base_bdevs_list": [ 00:17:51.263 { 00:17:51.263 "name": null, 00:17:51.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.263 "is_configured": false, 00:17:51.263 "data_offset": 2048, 00:17:51.263 "data_size": 63488 00:17:51.263 }, 00:17:51.263 { 00:17:51.263 "name": "pt2", 00:17:51.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.263 "is_configured": true, 00:17:51.263 "data_offset": 2048, 00:17:51.263 "data_size": 63488 00:17:51.263 }, 00:17:51.263 { 00:17:51.263 "name": "pt3", 00:17:51.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:51.263 "is_configured": true, 00:17:51.263 "data_offset": 2048, 00:17:51.263 "data_size": 63488 00:17:51.263 } 00:17:51.263 ] 00:17:51.263 }' 00:17:51.263 10:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.263 10:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.830 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.830 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.831 [2024-11-15 10:46:22.148706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.831 [2024-11-15 10:46:22.148872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.831 [2024-11-15 10:46:22.148984] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:17:51.831 [2024-11-15 10:46:22.149068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.831 [2024-11-15 10:46:22.149084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:51.831 10:46:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.831 [2024-11-15 10:46:22.220725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.831 [2024-11-15 10:46:22.220798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.831 [2024-11-15 10:46:22.220827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:51.831 [2024-11-15 10:46:22.220841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.831 [2024-11-15 10:46:22.223509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.831 [2024-11-15 10:46:22.223678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.831 [2024-11-15 10:46:22.223814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:51.831 [2024-11-15 10:46:22.223890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:51.831 [2024-11-15 10:46:22.224095] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:51.831 [2024-11-15 10:46:22.224115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.831 [2024-11-15 10:46:22.224148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:51.831 [2024-11-15 10:46:22.224218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.831 pt1 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:51.831 10:46:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.831 "name": "raid_bdev1", 00:17:51.831 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:51.831 "strip_size_kb": 64, 00:17:51.831 "state": "configuring", 00:17:51.831 "raid_level": "raid5f", 00:17:51.831 
"superblock": true, 00:17:51.831 "num_base_bdevs": 3, 00:17:51.831 "num_base_bdevs_discovered": 1, 00:17:51.831 "num_base_bdevs_operational": 2, 00:17:51.831 "base_bdevs_list": [ 00:17:51.831 { 00:17:51.831 "name": null, 00:17:51.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.831 "is_configured": false, 00:17:51.831 "data_offset": 2048, 00:17:51.831 "data_size": 63488 00:17:51.831 }, 00:17:51.831 { 00:17:51.831 "name": "pt2", 00:17:51.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.831 "is_configured": true, 00:17:51.831 "data_offset": 2048, 00:17:51.831 "data_size": 63488 00:17:51.831 }, 00:17:51.831 { 00:17:51.831 "name": null, 00:17:51.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:51.831 "is_configured": false, 00:17:51.831 "data_offset": 2048, 00:17:51.831 "data_size": 63488 00:17:51.831 } 00:17:51.831 ] 00:17:51.831 }' 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.831 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.397 [2024-11-15 10:46:22.824902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:52.397 [2024-11-15 10:46:22.824980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.397 [2024-11-15 10:46:22.825011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:52.397 [2024-11-15 10:46:22.825026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.397 [2024-11-15 10:46:22.825617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.397 [2024-11-15 10:46:22.825659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:52.397 [2024-11-15 10:46:22.825781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:52.397 [2024-11-15 10:46:22.825828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:52.397 [2024-11-15 10:46:22.826000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:52.397 [2024-11-15 10:46:22.826039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:52.397 [2024-11-15 10:46:22.826397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:52.397 [2024-11-15 10:46:22.831553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:52.397 [2024-11-15 10:46:22.831707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:52.397 [2024-11-15 10:46:22.832226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.397 pt3 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.397 "name": "raid_bdev1", 00:17:52.397 "uuid": "909ccf2e-aa5f-4084-9cd1-82eff0f1c72f", 00:17:52.397 "strip_size_kb": 64, 00:17:52.397 "state": "online", 00:17:52.397 "raid_level": 
"raid5f", 00:17:52.397 "superblock": true, 00:17:52.397 "num_base_bdevs": 3, 00:17:52.397 "num_base_bdevs_discovered": 2, 00:17:52.397 "num_base_bdevs_operational": 2, 00:17:52.397 "base_bdevs_list": [ 00:17:52.397 { 00:17:52.397 "name": null, 00:17:52.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.397 "is_configured": false, 00:17:52.397 "data_offset": 2048, 00:17:52.397 "data_size": 63488 00:17:52.397 }, 00:17:52.397 { 00:17:52.397 "name": "pt2", 00:17:52.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.397 "is_configured": true, 00:17:52.397 "data_offset": 2048, 00:17:52.397 "data_size": 63488 00:17:52.397 }, 00:17:52.397 { 00:17:52.397 "name": "pt3", 00:17:52.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:52.397 "is_configured": true, 00:17:52.397 "data_offset": 2048, 00:17:52.397 "data_size": 63488 00:17:52.397 } 00:17:52.397 ] 00:17:52.397 }' 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.397 10:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.964 [2024-11-15 10:46:23.442581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 909ccf2e-aa5f-4084-9cd1-82eff0f1c72f '!=' 909ccf2e-aa5f-4084-9cd1-82eff0f1c72f ']' 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81653 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81653 ']' 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81653 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:52.964 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81653 00:17:53.223 killing process with pid 81653 00:17:53.223 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:53.223 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:53.223 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81653' 00:17:53.223 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81653 00:17:53.223 10:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81653 00:17:53.223 [2024-11-15 10:46:23.538901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:17:53.223 [2024-11-15 10:46:23.539186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.223 [2024-11-15 10:46:23.539332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.223 [2024-11-15 10:46:23.539382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:53.482 [2024-11-15 10:46:23.813927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.421 10:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:54.421 00:17:54.421 real 0m8.594s 00:17:54.421 user 0m14.159s 00:17:54.421 sys 0m1.136s 00:17:54.421 10:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:54.421 ************************************ 00:17:54.421 END TEST raid5f_superblock_test 00:17:54.421 ************************************ 00:17:54.421 10:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.421 10:46:24 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:54.421 10:46:24 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:54.421 10:46:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:54.421 10:46:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:54.421 10:46:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.421 ************************************ 00:17:54.421 START TEST raid5f_rebuild_test 00:17:54.421 ************************************ 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:54.421 10:46:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82099 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82099 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 82099 ']' 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:54.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:54.421 10:46:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.421 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:54.421 Zero copy mechanism will not be used. 00:17:54.421 [2024-11-15 10:46:24.974603] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:17:54.421 [2024-11-15 10:46:24.974775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82099 ] 00:17:54.681 [2024-11-15 10:46:25.159550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.940 [2024-11-15 10:46:25.286114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.940 [2024-11-15 10:46:25.481390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.940 [2024-11-15 10:46:25.481462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.508 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:55.508 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:17:55.508 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:55.508 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:55.508 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.508 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.767 BaseBdev1_malloc 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 [2024-11-15 10:46:26.072674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:55.768 [2024-11-15 10:46:26.072756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.768 [2024-11-15 10:46:26.072789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:55.768 [2024-11-15 10:46:26.072807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.768 [2024-11-15 10:46:26.075540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.768 [2024-11-15 10:46:26.075765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:55.768 BaseBdev1 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 BaseBdev2_malloc 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 [2024-11-15 10:46:26.120766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:17:55.768 [2024-11-15 10:46:26.120979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.768 [2024-11-15 10:46:26.121023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:55.768 [2024-11-15 10:46:26.121042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.768 [2024-11-15 10:46:26.123640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.768 [2024-11-15 10:46:26.123692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:55.768 BaseBdev2 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 BaseBdev3_malloc 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 [2024-11-15 10:46:26.183299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:55.768 [2024-11-15 10:46:26.183555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.768 [2024-11-15 10:46:26.183599] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:17:55.768 [2024-11-15 10:46:26.183619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.768 [2024-11-15 10:46:26.186214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.768 [2024-11-15 10:46:26.186268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:55.768 BaseBdev3 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 spare_malloc 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 spare_delay 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 [2024-11-15 10:46:26.243455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:55.768 [2024-11-15 10:46:26.243558] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.768 [2024-11-15 10:46:26.243594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:55.768 [2024-11-15 10:46:26.243612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.768 [2024-11-15 10:46:26.246422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.768 [2024-11-15 10:46:26.246478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:55.768 spare 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 [2024-11-15 10:46:26.255491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.768 [2024-11-15 10:46:26.257722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.768 [2024-11-15 10:46:26.257956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.768 [2024-11-15 10:46:26.258097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:55.768 [2024-11-15 10:46:26.258117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:55.768 [2024-11-15 10:46:26.258485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:55.768 [2024-11-15 10:46:26.263832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:55.768 [2024-11-15 10:46:26.263975] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:55.768 [2024-11-15 10:46:26.264396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.768 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.768 10:46:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.768 "name": "raid_bdev1", 00:17:55.768 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:17:55.768 "strip_size_kb": 64, 00:17:55.768 "state": "online", 00:17:55.768 "raid_level": "raid5f", 00:17:55.768 "superblock": false, 00:17:55.768 "num_base_bdevs": 3, 00:17:55.768 "num_base_bdevs_discovered": 3, 00:17:55.768 "num_base_bdevs_operational": 3, 00:17:55.768 "base_bdevs_list": [ 00:17:55.768 { 00:17:55.768 "name": "BaseBdev1", 00:17:55.768 "uuid": "7a0014a2-dbe1-5597-8529-58c6a9343e71", 00:17:55.768 "is_configured": true, 00:17:55.768 "data_offset": 0, 00:17:55.769 "data_size": 65536 00:17:55.769 }, 00:17:55.769 { 00:17:55.769 "name": "BaseBdev2", 00:17:55.769 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:17:55.769 "is_configured": true, 00:17:55.769 "data_offset": 0, 00:17:55.769 "data_size": 65536 00:17:55.769 }, 00:17:55.769 { 00:17:55.769 "name": "BaseBdev3", 00:17:55.769 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:17:55.769 "is_configured": true, 00:17:55.769 "data_offset": 0, 00:17:55.769 "data_size": 65536 00:17:55.769 } 00:17:55.769 ] 00:17:55.769 }' 00:17:55.769 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.769 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:56.335 [2024-11-15 10:46:26.778224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:17:56.335 10:46:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:56.662 [2024-11-15 10:46:27.166158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:56.662 /dev/nbd0 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:56.937 1+0 records in 00:17:56.937 1+0 records out 00:17:56.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519407 s, 7.9 MB/s 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:56.937 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:57.197 512+0 records in 00:17:57.197 512+0 records out 00:17:57.197 67108864 bytes (67 MB, 64 MiB) copied, 0.464034 s, 145 MB/s 00:17:57.197 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:57.197 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.197 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:57.197 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:57.197 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:57.197 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.197 10:46:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:57.455 [2024-11-15 10:46:28.004963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.714 [2024-11-15 10:46:28.046521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.714 "name": "raid_bdev1", 00:17:57.714 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:17:57.714 "strip_size_kb": 64, 00:17:57.714 "state": "online", 00:17:57.714 "raid_level": "raid5f", 00:17:57.714 "superblock": false, 00:17:57.714 "num_base_bdevs": 3, 00:17:57.714 "num_base_bdevs_discovered": 2, 00:17:57.714 "num_base_bdevs_operational": 2, 00:17:57.714 "base_bdevs_list": [ 00:17:57.714 { 00:17:57.714 "name": null, 00:17:57.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.714 "is_configured": false, 00:17:57.714 "data_offset": 0, 00:17:57.714 "data_size": 65536 00:17:57.714 }, 00:17:57.714 { 00:17:57.714 "name": "BaseBdev2", 00:17:57.714 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:17:57.714 "is_configured": true, 00:17:57.714 "data_offset": 0, 00:17:57.714 "data_size": 65536 00:17:57.714 }, 00:17:57.714 { 00:17:57.714 "name": "BaseBdev3", 00:17:57.714 "uuid": 
"cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:17:57.714 "is_configured": true, 00:17:57.714 "data_offset": 0, 00:17:57.714 "data_size": 65536 00:17:57.714 } 00:17:57.714 ] 00:17:57.714 }' 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.714 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.282 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.282 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.282 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.282 [2024-11-15 10:46:28.582621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.282 [2024-11-15 10:46:28.597334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:58.282 10:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.282 10:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:58.282 [2024-11-15 10:46:28.604643] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.221 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.221 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.221 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.221 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.221 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.221 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.221 10:46:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.222 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.222 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.222 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.222 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.222 "name": "raid_bdev1", 00:17:59.222 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:17:59.222 "strip_size_kb": 64, 00:17:59.222 "state": "online", 00:17:59.222 "raid_level": "raid5f", 00:17:59.222 "superblock": false, 00:17:59.222 "num_base_bdevs": 3, 00:17:59.222 "num_base_bdevs_discovered": 3, 00:17:59.222 "num_base_bdevs_operational": 3, 00:17:59.222 "process": { 00:17:59.222 "type": "rebuild", 00:17:59.222 "target": "spare", 00:17:59.222 "progress": { 00:17:59.222 "blocks": 18432, 00:17:59.222 "percent": 14 00:17:59.222 } 00:17:59.222 }, 00:17:59.222 "base_bdevs_list": [ 00:17:59.222 { 00:17:59.222 "name": "spare", 00:17:59.222 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:17:59.222 "is_configured": true, 00:17:59.222 "data_offset": 0, 00:17:59.222 "data_size": 65536 00:17:59.222 }, 00:17:59.222 { 00:17:59.222 "name": "BaseBdev2", 00:17:59.222 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:17:59.222 "is_configured": true, 00:17:59.223 "data_offset": 0, 00:17:59.223 "data_size": 65536 00:17:59.223 }, 00:17:59.223 { 00:17:59.223 "name": "BaseBdev3", 00:17:59.223 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:17:59.223 "is_configured": true, 00:17:59.223 "data_offset": 0, 00:17:59.223 "data_size": 65536 00:17:59.223 } 00:17:59.223 ] 00:17:59.223 }' 00:17:59.223 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.223 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:59.223 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.223 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.223 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:59.223 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.223 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.223 [2024-11-15 10:46:29.774409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.483 [2024-11-15 10:46:29.818776] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:59.483 [2024-11-15 10:46:29.818903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.483 [2024-11-15 10:46:29.818938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.483 [2024-11-15 10:46:29.818952] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.483 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.483 "name": "raid_bdev1", 00:17:59.483 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:17:59.483 "strip_size_kb": 64, 00:17:59.483 "state": "online", 00:17:59.483 "raid_level": "raid5f", 00:17:59.483 "superblock": false, 00:17:59.483 "num_base_bdevs": 3, 00:17:59.483 "num_base_bdevs_discovered": 2, 00:17:59.483 "num_base_bdevs_operational": 2, 00:17:59.483 "base_bdevs_list": [ 00:17:59.483 { 00:17:59.483 "name": null, 00:17:59.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.483 "is_configured": false, 00:17:59.483 "data_offset": 0, 00:17:59.483 "data_size": 65536 00:17:59.483 }, 00:17:59.483 { 00:17:59.483 "name": "BaseBdev2", 00:17:59.483 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:17:59.483 "is_configured": true, 00:17:59.484 "data_offset": 0, 00:17:59.484 "data_size": 65536 00:17:59.484 }, 00:17:59.484 { 00:17:59.484 "name": "BaseBdev3", 00:17:59.484 "uuid": 
"cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:17:59.484 "is_configured": true, 00:17:59.484 "data_offset": 0, 00:17:59.484 "data_size": 65536 00:17:59.484 } 00:17:59.484 ] 00:17:59.484 }' 00:17:59.484 10:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.484 10:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.050 "name": "raid_bdev1", 00:18:00.050 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:00.050 "strip_size_kb": 64, 00:18:00.050 "state": "online", 00:18:00.050 "raid_level": "raid5f", 00:18:00.050 "superblock": false, 00:18:00.050 "num_base_bdevs": 3, 00:18:00.050 "num_base_bdevs_discovered": 2, 00:18:00.050 "num_base_bdevs_operational": 2, 00:18:00.050 "base_bdevs_list": [ 00:18:00.050 { 00:18:00.050 
"name": null, 00:18:00.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.050 "is_configured": false, 00:18:00.050 "data_offset": 0, 00:18:00.050 "data_size": 65536 00:18:00.050 }, 00:18:00.050 { 00:18:00.050 "name": "BaseBdev2", 00:18:00.050 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:00.050 "is_configured": true, 00:18:00.050 "data_offset": 0, 00:18:00.050 "data_size": 65536 00:18:00.050 }, 00:18:00.050 { 00:18:00.050 "name": "BaseBdev3", 00:18:00.050 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:18:00.050 "is_configured": true, 00:18:00.050 "data_offset": 0, 00:18:00.050 "data_size": 65536 00:18:00.050 } 00:18:00.050 ] 00:18:00.050 }' 00:18:00.050 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.051 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.051 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.051 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.051 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:00.051 10:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.051 10:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.051 [2024-11-15 10:46:30.525403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.051 [2024-11-15 10:46:30.539069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:00.051 10:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.051 10:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:00.051 [2024-11-15 10:46:30.546497] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.424 "name": "raid_bdev1", 00:18:01.424 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:01.424 "strip_size_kb": 64, 00:18:01.424 "state": "online", 00:18:01.424 "raid_level": "raid5f", 00:18:01.424 "superblock": false, 00:18:01.424 "num_base_bdevs": 3, 00:18:01.424 "num_base_bdevs_discovered": 3, 00:18:01.424 "num_base_bdevs_operational": 3, 00:18:01.424 "process": { 00:18:01.424 "type": "rebuild", 00:18:01.424 "target": "spare", 00:18:01.424 "progress": { 00:18:01.424 "blocks": 18432, 00:18:01.424 "percent": 14 00:18:01.424 } 00:18:01.424 }, 00:18:01.424 "base_bdevs_list": [ 00:18:01.424 { 00:18:01.424 "name": "spare", 00:18:01.424 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:18:01.424 "is_configured": true, 00:18:01.424 "data_offset": 0, 
00:18:01.424 "data_size": 65536 00:18:01.424 }, 00:18:01.424 { 00:18:01.424 "name": "BaseBdev2", 00:18:01.424 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:01.424 "is_configured": true, 00:18:01.424 "data_offset": 0, 00:18:01.424 "data_size": 65536 00:18:01.424 }, 00:18:01.424 { 00:18:01.424 "name": "BaseBdev3", 00:18:01.424 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:18:01.424 "is_configured": true, 00:18:01.424 "data_offset": 0, 00:18:01.424 "data_size": 65536 00:18:01.424 } 00:18:01.424 ] 00:18:01.424 }' 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.424 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=585 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.425 10:46:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.425 "name": "raid_bdev1", 00:18:01.425 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:01.425 "strip_size_kb": 64, 00:18:01.425 "state": "online", 00:18:01.425 "raid_level": "raid5f", 00:18:01.425 "superblock": false, 00:18:01.425 "num_base_bdevs": 3, 00:18:01.425 "num_base_bdevs_discovered": 3, 00:18:01.425 "num_base_bdevs_operational": 3, 00:18:01.425 "process": { 00:18:01.425 "type": "rebuild", 00:18:01.425 "target": "spare", 00:18:01.425 "progress": { 00:18:01.425 "blocks": 22528, 00:18:01.425 "percent": 17 00:18:01.425 } 00:18:01.425 }, 00:18:01.425 "base_bdevs_list": [ 00:18:01.425 { 00:18:01.425 "name": "spare", 00:18:01.425 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:18:01.425 "is_configured": true, 00:18:01.425 "data_offset": 0, 00:18:01.425 "data_size": 65536 00:18:01.425 }, 00:18:01.425 { 00:18:01.425 "name": "BaseBdev2", 00:18:01.425 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:01.425 "is_configured": true, 00:18:01.425 "data_offset": 0, 00:18:01.425 "data_size": 65536 00:18:01.425 }, 00:18:01.425 { 00:18:01.425 "name": "BaseBdev3", 00:18:01.425 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:18:01.425 "is_configured": true, 00:18:01.425 "data_offset": 0, 00:18:01.425 "data_size": 65536 00:18:01.425 } 
00:18:01.425 ] 00:18:01.425 }' 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.425 10:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.359 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.359 "name": "raid_bdev1", 00:18:02.359 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:02.359 
"strip_size_kb": 64, 00:18:02.359 "state": "online", 00:18:02.359 "raid_level": "raid5f", 00:18:02.359 "superblock": false, 00:18:02.359 "num_base_bdevs": 3, 00:18:02.359 "num_base_bdevs_discovered": 3, 00:18:02.359 "num_base_bdevs_operational": 3, 00:18:02.359 "process": { 00:18:02.359 "type": "rebuild", 00:18:02.359 "target": "spare", 00:18:02.359 "progress": { 00:18:02.359 "blocks": 45056, 00:18:02.359 "percent": 34 00:18:02.359 } 00:18:02.359 }, 00:18:02.359 "base_bdevs_list": [ 00:18:02.359 { 00:18:02.359 "name": "spare", 00:18:02.359 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:18:02.359 "is_configured": true, 00:18:02.359 "data_offset": 0, 00:18:02.359 "data_size": 65536 00:18:02.359 }, 00:18:02.359 { 00:18:02.359 "name": "BaseBdev2", 00:18:02.359 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:02.359 "is_configured": true, 00:18:02.359 "data_offset": 0, 00:18:02.359 "data_size": 65536 00:18:02.359 }, 00:18:02.359 { 00:18:02.359 "name": "BaseBdev3", 00:18:02.359 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:18:02.359 "is_configured": true, 00:18:02.359 "data_offset": 0, 00:18:02.359 "data_size": 65536 00:18:02.359 } 00:18:02.359 ] 00:18:02.359 }' 00:18:02.617 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.617 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.617 10:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.617 10:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.617 10:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.555 10:46:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.555 "name": "raid_bdev1", 00:18:03.555 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:03.555 "strip_size_kb": 64, 00:18:03.555 "state": "online", 00:18:03.555 "raid_level": "raid5f", 00:18:03.555 "superblock": false, 00:18:03.555 "num_base_bdevs": 3, 00:18:03.555 "num_base_bdevs_discovered": 3, 00:18:03.555 "num_base_bdevs_operational": 3, 00:18:03.555 "process": { 00:18:03.555 "type": "rebuild", 00:18:03.555 "target": "spare", 00:18:03.555 "progress": { 00:18:03.555 "blocks": 69632, 00:18:03.555 "percent": 53 00:18:03.555 } 00:18:03.555 }, 00:18:03.555 "base_bdevs_list": [ 00:18:03.555 { 00:18:03.555 "name": "spare", 00:18:03.555 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:18:03.555 "is_configured": true, 00:18:03.555 "data_offset": 0, 00:18:03.555 "data_size": 65536 00:18:03.555 }, 00:18:03.555 { 00:18:03.555 "name": "BaseBdev2", 00:18:03.555 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:03.555 
"is_configured": true, 00:18:03.555 "data_offset": 0, 00:18:03.555 "data_size": 65536 00:18:03.555 }, 00:18:03.555 { 00:18:03.555 "name": "BaseBdev3", 00:18:03.555 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:18:03.555 "is_configured": true, 00:18:03.555 "data_offset": 0, 00:18:03.555 "data_size": 65536 00:18:03.555 } 00:18:03.555 ] 00:18:03.555 }' 00:18:03.555 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.813 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.813 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.813 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.813 10:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.747 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.747 "name": "raid_bdev1", 00:18:04.747 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:04.747 "strip_size_kb": 64, 00:18:04.747 "state": "online", 00:18:04.747 "raid_level": "raid5f", 00:18:04.747 "superblock": false, 00:18:04.747 "num_base_bdevs": 3, 00:18:04.747 "num_base_bdevs_discovered": 3, 00:18:04.747 "num_base_bdevs_operational": 3, 00:18:04.747 "process": { 00:18:04.747 "type": "rebuild", 00:18:04.748 "target": "spare", 00:18:04.748 "progress": { 00:18:04.748 "blocks": 92160, 00:18:04.748 "percent": 70 00:18:04.748 } 00:18:04.748 }, 00:18:04.748 "base_bdevs_list": [ 00:18:04.748 { 00:18:04.748 "name": "spare", 00:18:04.748 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:18:04.748 "is_configured": true, 00:18:04.748 "data_offset": 0, 00:18:04.748 "data_size": 65536 00:18:04.748 }, 00:18:04.748 { 00:18:04.748 "name": "BaseBdev2", 00:18:04.748 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:04.748 "is_configured": true, 00:18:04.748 "data_offset": 0, 00:18:04.748 "data_size": 65536 00:18:04.748 }, 00:18:04.748 { 00:18:04.748 "name": "BaseBdev3", 00:18:04.748 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:18:04.748 "is_configured": true, 00:18:04.748 "data_offset": 0, 00:18:04.748 "data_size": 65536 00:18:04.748 } 00:18:04.748 ] 00:18:04.748 }' 00:18:04.748 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.748 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.748 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.011 10:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.011 10:46:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.950 "name": "raid_bdev1", 00:18:05.950 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:05.950 "strip_size_kb": 64, 00:18:05.950 "state": "online", 00:18:05.950 "raid_level": "raid5f", 00:18:05.950 "superblock": false, 00:18:05.950 "num_base_bdevs": 3, 00:18:05.950 "num_base_bdevs_discovered": 3, 00:18:05.950 "num_base_bdevs_operational": 3, 00:18:05.950 "process": { 00:18:05.950 "type": "rebuild", 00:18:05.950 "target": "spare", 00:18:05.950 "progress": { 00:18:05.950 "blocks": 116736, 00:18:05.950 "percent": 89 00:18:05.950 } 00:18:05.950 }, 00:18:05.950 "base_bdevs_list": [ 00:18:05.950 { 
00:18:05.950 "name": "spare", 00:18:05.950 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:18:05.950 "is_configured": true, 00:18:05.950 "data_offset": 0, 00:18:05.950 "data_size": 65536 00:18:05.950 }, 00:18:05.950 { 00:18:05.950 "name": "BaseBdev2", 00:18:05.950 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:05.950 "is_configured": true, 00:18:05.950 "data_offset": 0, 00:18:05.950 "data_size": 65536 00:18:05.950 }, 00:18:05.950 { 00:18:05.950 "name": "BaseBdev3", 00:18:05.950 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:18:05.950 "is_configured": true, 00:18:05.950 "data_offset": 0, 00:18:05.950 "data_size": 65536 00:18:05.950 } 00:18:05.950 ] 00:18:05.950 }' 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.950 10:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:06.517 [2024-11-15 10:46:37.017443] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:06.517 [2024-11-15 10:46:37.017568] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:06.517 [2024-11-15 10:46:37.017633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.083 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.084 10:46:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.084 "name": "raid_bdev1", 00:18:07.084 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:07.084 "strip_size_kb": 64, 00:18:07.084 "state": "online", 00:18:07.084 "raid_level": "raid5f", 00:18:07.084 "superblock": false, 00:18:07.084 "num_base_bdevs": 3, 00:18:07.084 "num_base_bdevs_discovered": 3, 00:18:07.084 "num_base_bdevs_operational": 3, 00:18:07.084 "base_bdevs_list": [ 00:18:07.084 { 00:18:07.084 "name": "spare", 00:18:07.084 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:18:07.084 "is_configured": true, 00:18:07.084 "data_offset": 0, 00:18:07.084 "data_size": 65536 00:18:07.084 }, 00:18:07.084 { 00:18:07.084 "name": "BaseBdev2", 00:18:07.084 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:07.084 "is_configured": true, 00:18:07.084 "data_offset": 0, 00:18:07.084 "data_size": 65536 00:18:07.084 }, 00:18:07.084 { 00:18:07.084 "name": "BaseBdev3", 00:18:07.084 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:18:07.084 "is_configured": true, 00:18:07.084 "data_offset": 0, 00:18:07.084 "data_size": 65536 00:18:07.084 } 
00:18:07.084 ] 00:18:07.084 }' 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:07.084 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.343 "name": "raid_bdev1", 00:18:07.343 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:07.343 "strip_size_kb": 64, 00:18:07.343 "state": "online", 00:18:07.343 "raid_level": "raid5f", 00:18:07.343 "superblock": false, 
00:18:07.343 "num_base_bdevs": 3, 00:18:07.343 "num_base_bdevs_discovered": 3, 00:18:07.343 "num_base_bdevs_operational": 3, 00:18:07.343 "base_bdevs_list": [ 00:18:07.343 { 00:18:07.343 "name": "spare", 00:18:07.343 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:18:07.343 "is_configured": true, 00:18:07.343 "data_offset": 0, 00:18:07.343 "data_size": 65536 00:18:07.343 }, 00:18:07.343 { 00:18:07.343 "name": "BaseBdev2", 00:18:07.343 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:07.343 "is_configured": true, 00:18:07.343 "data_offset": 0, 00:18:07.343 "data_size": 65536 00:18:07.343 }, 00:18:07.343 { 00:18:07.343 "name": "BaseBdev3", 00:18:07.343 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 00:18:07.343 "is_configured": true, 00:18:07.343 "data_offset": 0, 00:18:07.343 "data_size": 65536 00:18:07.343 } 00:18:07.343 ] 00:18:07.343 }' 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.343 
10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.343 "name": "raid_bdev1", 00:18:07.343 "uuid": "0742c655-2ec8-42f7-aa65-79122f14d96f", 00:18:07.343 "strip_size_kb": 64, 00:18:07.343 "state": "online", 00:18:07.343 "raid_level": "raid5f", 00:18:07.343 "superblock": false, 00:18:07.343 "num_base_bdevs": 3, 00:18:07.343 "num_base_bdevs_discovered": 3, 00:18:07.343 "num_base_bdevs_operational": 3, 00:18:07.343 "base_bdevs_list": [ 00:18:07.343 { 00:18:07.343 "name": "spare", 00:18:07.343 "uuid": "413071d1-2f33-590a-baf5-990f49aa1214", 00:18:07.343 "is_configured": true, 00:18:07.343 "data_offset": 0, 00:18:07.343 "data_size": 65536 00:18:07.343 }, 00:18:07.343 { 00:18:07.343 "name": "BaseBdev2", 00:18:07.343 "uuid": "7ea68f7f-59cf-54dd-a341-87162c7264c8", 00:18:07.343 "is_configured": true, 00:18:07.343 "data_offset": 0, 00:18:07.343 "data_size": 65536 00:18:07.343 }, 00:18:07.343 { 00:18:07.343 "name": "BaseBdev3", 00:18:07.343 "uuid": "cb47fd3a-c5d7-5718-ba90-26949cbf3935", 
00:18:07.343 "is_configured": true, 00:18:07.343 "data_offset": 0, 00:18:07.343 "data_size": 65536 00:18:07.343 } 00:18:07.343 ] 00:18:07.343 }' 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.343 10:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.910 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.910 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.910 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.910 [2024-11-15 10:46:38.340198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.910 [2024-11-15 10:46:38.340234] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.910 [2024-11-15 10:46:38.340333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.910 [2024-11-15 10:46:38.340470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.910 [2024-11-15 10:46:38.340496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:07.910 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.910 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.910 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.910 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.910 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:07.910 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:07.911 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:08.478 /dev/nbd0 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.478 1+0 records in 00:18:08.478 1+0 records out 00:18:08.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615729 s, 6.7 MB/s 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.478 10:46:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:08.738 /dev/nbd1 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:08.738 10:46:39 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.738 1+0 records in 00:18:08.738 1+0 records out 00:18:08.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418611 s, 9.8 MB/s 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.738 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:08.997 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:08.997 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.997 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:08.997 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.997 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:08.997 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.997 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.255 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82099 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 82099 ']' 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 82099 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82099 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:09.583 killing process with pid 82099 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82099' 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 82099 00:18:09.583 
Received shutdown signal, test time was about 60.000000 seconds 00:18:09.583 00:18:09.583 Latency(us) 00:18:09.583 [2024-11-15T10:46:40.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.583 [2024-11-15T10:46:40.143Z] =================================================================================================================== 00:18:09.583 [2024-11-15T10:46:40.143Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.583 [2024-11-15 10:46:39.975212] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.583 10:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 82099 00:18:09.842 [2024-11-15 10:46:40.319359] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:10.775 10:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:10.775 00:18:10.775 real 0m16.449s 00:18:10.775 user 0m21.258s 00:18:10.775 sys 0m1.922s 00:18:10.775 10:46:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:10.775 ************************************ 00:18:10.775 END TEST raid5f_rebuild_test 00:18:10.775 ************************************ 00:18:10.775 10:46:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.034 10:46:41 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:18:11.034 10:46:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:11.034 10:46:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:11.034 10:46:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.034 ************************************ 00:18:11.034 START TEST raid5f_rebuild_test_sb 00:18:11.034 ************************************ 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:18:11.034 
10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82545 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82545 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82545 ']' 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:11.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:11.034 10:46:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.034 [2024-11-15 10:46:41.461659] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:18:11.034 [2024-11-15 10:46:41.461819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:11.034 Zero copy mechanism will not be used. 00:18:11.034 -allocations --file-prefix=spdk_pid82545 ] 00:18:11.292 [2024-11-15 10:46:41.634504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.292 [2024-11-15 10:46:41.741766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.550 [2024-11-15 10:46:41.923472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.550 [2024-11-15 10:46:41.923532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.117 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.117 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:12.117 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.117 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:12.117 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.117 10:46:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.117 BaseBdev1_malloc 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.118 [2024-11-15 10:46:42.575140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:12.118 [2024-11-15 10:46:42.575251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.118 [2024-11-15 10:46:42.575288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:12.118 [2024-11-15 10:46:42.575306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.118 [2024-11-15 10:46:42.578148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.118 [2024-11-15 10:46:42.578210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:12.118 BaseBdev1 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.118 BaseBdev2_malloc 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.118 [2024-11-15 10:46:42.627507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:12.118 [2024-11-15 10:46:42.627599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.118 [2024-11-15 10:46:42.627635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:12.118 [2024-11-15 10:46:42.627653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.118 [2024-11-15 10:46:42.630303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.118 [2024-11-15 10:46:42.630371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:12.118 BaseBdev2 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.118 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 BaseBdev3_malloc 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 [2024-11-15 10:46:42.683853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:12.378 [2024-11-15 10:46:42.683942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.378 [2024-11-15 10:46:42.683979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:12.378 [2024-11-15 10:46:42.683998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.378 [2024-11-15 10:46:42.686738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.378 [2024-11-15 10:46:42.686796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:12.378 BaseBdev3 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 spare_malloc 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 spare_delay 00:18:12.378 
10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 [2024-11-15 10:46:42.743825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.378 [2024-11-15 10:46:42.743899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.378 [2024-11-15 10:46:42.743929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:12.378 [2024-11-15 10:46:42.743946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.378 [2024-11-15 10:46:42.746645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.378 [2024-11-15 10:46:42.746700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.378 spare 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.378 [2024-11-15 10:46:42.751920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.378 [2024-11-15 10:46:42.754170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.378 [2024-11-15 10:46:42.754270] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.378 [2024-11-15 10:46:42.754550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:12.378 [2024-11-15 10:46:42.754580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:12.378 [2024-11-15 10:46:42.754932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:12.378 [2024-11-15 10:46:42.760188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:12.378 [2024-11-15 10:46:42.760230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:12.378 [2024-11-15 10:46:42.760525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.378 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.379 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:12.379 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.379 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.379 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.379 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.379 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.379 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.379 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.379 "name": "raid_bdev1", 00:18:12.379 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:12.379 "strip_size_kb": 64, 00:18:12.379 "state": "online", 00:18:12.379 "raid_level": "raid5f", 00:18:12.379 "superblock": true, 00:18:12.379 "num_base_bdevs": 3, 00:18:12.379 "num_base_bdevs_discovered": 3, 00:18:12.379 "num_base_bdevs_operational": 3, 00:18:12.379 "base_bdevs_list": [ 00:18:12.379 { 00:18:12.379 "name": "BaseBdev1", 00:18:12.379 "uuid": "560f5a22-fa18-5e05-9715-75f7cae394bd", 00:18:12.379 "is_configured": true, 00:18:12.379 "data_offset": 2048, 00:18:12.379 "data_size": 63488 00:18:12.379 }, 00:18:12.379 { 00:18:12.379 "name": "BaseBdev2", 00:18:12.379 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:12.379 "is_configured": true, 00:18:12.379 "data_offset": 2048, 00:18:12.379 "data_size": 63488 00:18:12.379 }, 00:18:12.379 { 00:18:12.379 "name": "BaseBdev3", 00:18:12.379 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:12.379 "is_configured": true, 00:18:12.379 "data_offset": 2048, 00:18:12.379 "data_size": 63488 00:18:12.379 } 00:18:12.379 ] 00:18:12.379 }' 00:18:12.379 10:46:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.379 10:46:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.946 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:12.946 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:12.947 [2024-11-15 10:46:43.370471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:12.947 10:46:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:12.947 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:13.206 [2024-11-15 10:46:43.738203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:13.206 /dev/nbd0 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 
00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.465 1+0 records in 00:18:13.465 1+0 records out 00:18:13.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347728 s, 11.8 MB/s 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:13.465 10:46:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:18:14.033 496+0 records in 00:18:14.033 496+0 records out 00:18:14.033 65011712 bytes (65 MB, 62 MiB) copied, 0.496779 s, 131 MB/s 00:18:14.033 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:14.033 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:14.033 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:14.033 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.033 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:14.033 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.033 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:14.301 [2024-11-15 10:46:44.617614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.301 [2024-11-15 10:46:44.635059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.301 "name": "raid_bdev1", 00:18:14.301 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:14.301 "strip_size_kb": 64, 00:18:14.301 "state": "online", 00:18:14.301 "raid_level": "raid5f", 00:18:14.301 "superblock": true, 00:18:14.301 "num_base_bdevs": 3, 00:18:14.301 "num_base_bdevs_discovered": 2, 00:18:14.301 "num_base_bdevs_operational": 2, 00:18:14.301 "base_bdevs_list": [ 00:18:14.301 { 00:18:14.301 "name": null, 00:18:14.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.301 "is_configured": false, 00:18:14.301 "data_offset": 0, 00:18:14.301 "data_size": 63488 00:18:14.301 }, 00:18:14.301 { 00:18:14.301 "name": "BaseBdev2", 00:18:14.301 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:14.301 "is_configured": true, 00:18:14.301 "data_offset": 2048, 00:18:14.301 "data_size": 63488 00:18:14.301 }, 00:18:14.301 { 00:18:14.301 "name": "BaseBdev3", 00:18:14.301 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:14.301 "is_configured": true, 00:18:14.301 "data_offset": 2048, 00:18:14.301 "data_size": 63488 00:18:14.301 } 00:18:14.301 ] 00:18:14.301 }' 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.301 10:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.866 10:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.866 10:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.866 10:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.866 [2024-11-15 10:46:45.147191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.866 [2024-11-15 10:46:45.161740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:18:14.866 10:46:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.867 10:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:14.867 [2024-11-15 10:46:45.169025] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.804 "name": "raid_bdev1", 00:18:15.804 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:15.804 "strip_size_kb": 64, 00:18:15.804 "state": "online", 00:18:15.804 "raid_level": "raid5f", 00:18:15.804 "superblock": true, 00:18:15.804 "num_base_bdevs": 3, 00:18:15.804 "num_base_bdevs_discovered": 3, 00:18:15.804 "num_base_bdevs_operational": 3, 00:18:15.804 "process": { 00:18:15.804 "type": "rebuild", 00:18:15.804 "target": "spare", 00:18:15.804 "progress": { 
00:18:15.804 "blocks": 18432, 00:18:15.804 "percent": 14 00:18:15.804 } 00:18:15.804 }, 00:18:15.804 "base_bdevs_list": [ 00:18:15.804 { 00:18:15.804 "name": "spare", 00:18:15.804 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:15.804 "is_configured": true, 00:18:15.804 "data_offset": 2048, 00:18:15.804 "data_size": 63488 00:18:15.804 }, 00:18:15.804 { 00:18:15.804 "name": "BaseBdev2", 00:18:15.804 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:15.804 "is_configured": true, 00:18:15.804 "data_offset": 2048, 00:18:15.804 "data_size": 63488 00:18:15.804 }, 00:18:15.804 { 00:18:15.804 "name": "BaseBdev3", 00:18:15.804 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:15.804 "is_configured": true, 00:18:15.804 "data_offset": 2048, 00:18:15.804 "data_size": 63488 00:18:15.804 } 00:18:15.804 ] 00:18:15.804 }' 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.804 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.804 [2024-11-15 10:46:46.338561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.064 [2024-11-15 10:46:46.382891] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:16.064 [2024-11-15 10:46:46.383223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:16.064 [2024-11-15 10:46:46.383435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.064 [2024-11-15 10:46:46.383563] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.064 10:46:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.064 "name": "raid_bdev1", 00:18:16.064 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:16.064 "strip_size_kb": 64, 00:18:16.064 "state": "online", 00:18:16.064 "raid_level": "raid5f", 00:18:16.064 "superblock": true, 00:18:16.064 "num_base_bdevs": 3, 00:18:16.064 "num_base_bdevs_discovered": 2, 00:18:16.064 "num_base_bdevs_operational": 2, 00:18:16.064 "base_bdevs_list": [ 00:18:16.064 { 00:18:16.064 "name": null, 00:18:16.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.064 "is_configured": false, 00:18:16.064 "data_offset": 0, 00:18:16.064 "data_size": 63488 00:18:16.064 }, 00:18:16.064 { 00:18:16.064 "name": "BaseBdev2", 00:18:16.064 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:16.064 "is_configured": true, 00:18:16.064 "data_offset": 2048, 00:18:16.064 "data_size": 63488 00:18:16.064 }, 00:18:16.064 { 00:18:16.064 "name": "BaseBdev3", 00:18:16.064 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:16.064 "is_configured": true, 00:18:16.064 "data_offset": 2048, 00:18:16.064 "data_size": 63488 00:18:16.064 } 00:18:16.064 ] 00:18:16.064 }' 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.064 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.417 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.417 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.417 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.417 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.417 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.417 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.417 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.417 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.417 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.676 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.676 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.676 "name": "raid_bdev1", 00:18:16.676 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:16.676 "strip_size_kb": 64, 00:18:16.676 "state": "online", 00:18:16.676 "raid_level": "raid5f", 00:18:16.676 "superblock": true, 00:18:16.676 "num_base_bdevs": 3, 00:18:16.676 "num_base_bdevs_discovered": 2, 00:18:16.676 "num_base_bdevs_operational": 2, 00:18:16.676 "base_bdevs_list": [ 00:18:16.676 { 00:18:16.676 "name": null, 00:18:16.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.676 "is_configured": false, 00:18:16.676 "data_offset": 0, 00:18:16.676 "data_size": 63488 00:18:16.676 }, 00:18:16.676 { 00:18:16.676 "name": "BaseBdev2", 00:18:16.676 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:16.676 "is_configured": true, 00:18:16.677 "data_offset": 2048, 00:18:16.677 "data_size": 63488 00:18:16.677 }, 00:18:16.677 { 00:18:16.677 "name": "BaseBdev3", 00:18:16.677 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:16.677 "is_configured": true, 00:18:16.677 "data_offset": 2048, 00:18:16.677 "data_size": 63488 00:18:16.677 } 00:18:16.677 ] 00:18:16.677 }' 00:18:16.677 10:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.677 10:46:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.677 10:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.677 10:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.677 10:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:16.677 10:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.677 10:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.677 [2024-11-15 10:46:47.109423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.677 [2024-11-15 10:46:47.123206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:18:16.677 10:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.677 10:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:16.677 [2024-11-15 10:46:47.130609] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.635 "name": "raid_bdev1", 00:18:17.635 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:17.635 "strip_size_kb": 64, 00:18:17.635 "state": "online", 00:18:17.635 "raid_level": "raid5f", 00:18:17.635 "superblock": true, 00:18:17.635 "num_base_bdevs": 3, 00:18:17.635 "num_base_bdevs_discovered": 3, 00:18:17.635 "num_base_bdevs_operational": 3, 00:18:17.635 "process": { 00:18:17.635 "type": "rebuild", 00:18:17.635 "target": "spare", 00:18:17.635 "progress": { 00:18:17.635 "blocks": 18432, 00:18:17.635 "percent": 14 00:18:17.635 } 00:18:17.635 }, 00:18:17.635 "base_bdevs_list": [ 00:18:17.635 { 00:18:17.635 "name": "spare", 00:18:17.635 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:17.635 "is_configured": true, 00:18:17.635 "data_offset": 2048, 00:18:17.635 "data_size": 63488 00:18:17.635 }, 00:18:17.635 { 00:18:17.635 "name": "BaseBdev2", 00:18:17.635 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:17.635 "is_configured": true, 00:18:17.635 "data_offset": 2048, 00:18:17.635 "data_size": 63488 00:18:17.635 }, 00:18:17.635 { 00:18:17.635 "name": "BaseBdev3", 00:18:17.635 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:17.635 "is_configured": true, 00:18:17.635 "data_offset": 2048, 00:18:17.635 "data_size": 63488 00:18:17.635 } 00:18:17.635 ] 00:18:17.635 }' 00:18:17.635 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:17.894 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=602 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.894 "name": "raid_bdev1", 00:18:17.894 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:17.894 "strip_size_kb": 64, 00:18:17.894 "state": "online", 00:18:17.894 "raid_level": "raid5f", 00:18:17.894 "superblock": true, 00:18:17.894 "num_base_bdevs": 3, 00:18:17.894 "num_base_bdevs_discovered": 3, 00:18:17.894 "num_base_bdevs_operational": 3, 00:18:17.894 "process": { 00:18:17.894 "type": "rebuild", 00:18:17.894 "target": "spare", 00:18:17.894 "progress": { 00:18:17.894 "blocks": 22528, 00:18:17.894 "percent": 17 00:18:17.894 } 00:18:17.894 }, 00:18:17.894 "base_bdevs_list": [ 00:18:17.894 { 00:18:17.894 "name": "spare", 00:18:17.894 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:17.894 "is_configured": true, 00:18:17.894 "data_offset": 2048, 00:18:17.894 "data_size": 63488 00:18:17.894 }, 00:18:17.894 { 00:18:17.894 "name": "BaseBdev2", 00:18:17.894 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:17.894 "is_configured": true, 00:18:17.894 "data_offset": 2048, 00:18:17.894 "data_size": 63488 00:18:17.894 }, 00:18:17.894 { 00:18:17.894 "name": "BaseBdev3", 00:18:17.894 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:17.894 "is_configured": true, 00:18:17.894 "data_offset": 2048, 00:18:17.894 "data_size": 63488 00:18:17.894 } 00:18:17.894 ] 00:18:17.894 }' 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.894 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.153 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:18:18.153 10:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.089 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.089 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.089 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.089 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.089 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.090 "name": "raid_bdev1", 00:18:19.090 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:19.090 "strip_size_kb": 64, 00:18:19.090 "state": "online", 00:18:19.090 "raid_level": "raid5f", 00:18:19.090 "superblock": true, 00:18:19.090 "num_base_bdevs": 3, 00:18:19.090 "num_base_bdevs_discovered": 3, 00:18:19.090 "num_base_bdevs_operational": 3, 00:18:19.090 "process": { 00:18:19.090 "type": "rebuild", 00:18:19.090 "target": "spare", 00:18:19.090 "progress": { 00:18:19.090 "blocks": 47104, 00:18:19.090 "percent": 37 00:18:19.090 } 00:18:19.090 }, 
00:18:19.090 "base_bdevs_list": [ 00:18:19.090 { 00:18:19.090 "name": "spare", 00:18:19.090 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:19.090 "is_configured": true, 00:18:19.090 "data_offset": 2048, 00:18:19.090 "data_size": 63488 00:18:19.090 }, 00:18:19.090 { 00:18:19.090 "name": "BaseBdev2", 00:18:19.090 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:19.090 "is_configured": true, 00:18:19.090 "data_offset": 2048, 00:18:19.090 "data_size": 63488 00:18:19.090 }, 00:18:19.090 { 00:18:19.090 "name": "BaseBdev3", 00:18:19.090 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:19.090 "is_configured": true, 00:18:19.090 "data_offset": 2048, 00:18:19.090 "data_size": 63488 00:18:19.090 } 00:18:19.090 ] 00:18:19.090 }' 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.090 10:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.465 
10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.465 "name": "raid_bdev1", 00:18:20.465 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:20.465 "strip_size_kb": 64, 00:18:20.465 "state": "online", 00:18:20.465 "raid_level": "raid5f", 00:18:20.465 "superblock": true, 00:18:20.465 "num_base_bdevs": 3, 00:18:20.465 "num_base_bdevs_discovered": 3, 00:18:20.465 "num_base_bdevs_operational": 3, 00:18:20.465 "process": { 00:18:20.465 "type": "rebuild", 00:18:20.465 "target": "spare", 00:18:20.465 "progress": { 00:18:20.465 "blocks": 69632, 00:18:20.465 "percent": 54 00:18:20.465 } 00:18:20.465 }, 00:18:20.465 "base_bdevs_list": [ 00:18:20.465 { 00:18:20.465 "name": "spare", 00:18:20.465 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:20.465 "is_configured": true, 00:18:20.465 "data_offset": 2048, 00:18:20.465 "data_size": 63488 00:18:20.465 }, 00:18:20.465 { 00:18:20.465 "name": "BaseBdev2", 00:18:20.465 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:20.465 "is_configured": true, 00:18:20.465 "data_offset": 2048, 00:18:20.465 "data_size": 63488 00:18:20.465 }, 00:18:20.465 { 00:18:20.465 "name": "BaseBdev3", 00:18:20.465 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:20.465 "is_configured": true, 00:18:20.465 "data_offset": 2048, 00:18:20.465 "data_size": 63488 00:18:20.465 } 00:18:20.465 ] 00:18:20.465 }' 00:18:20.465 10:46:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.465 10:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.402 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.402 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.402 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.402 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.402 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.403 "name": "raid_bdev1", 00:18:21.403 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:21.403 
"strip_size_kb": 64, 00:18:21.403 "state": "online", 00:18:21.403 "raid_level": "raid5f", 00:18:21.403 "superblock": true, 00:18:21.403 "num_base_bdevs": 3, 00:18:21.403 "num_base_bdevs_discovered": 3, 00:18:21.403 "num_base_bdevs_operational": 3, 00:18:21.403 "process": { 00:18:21.403 "type": "rebuild", 00:18:21.403 "target": "spare", 00:18:21.403 "progress": { 00:18:21.403 "blocks": 94208, 00:18:21.403 "percent": 74 00:18:21.403 } 00:18:21.403 }, 00:18:21.403 "base_bdevs_list": [ 00:18:21.403 { 00:18:21.403 "name": "spare", 00:18:21.403 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:21.403 "is_configured": true, 00:18:21.403 "data_offset": 2048, 00:18:21.403 "data_size": 63488 00:18:21.403 }, 00:18:21.403 { 00:18:21.403 "name": "BaseBdev2", 00:18:21.403 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:21.403 "is_configured": true, 00:18:21.403 "data_offset": 2048, 00:18:21.403 "data_size": 63488 00:18:21.403 }, 00:18:21.403 { 00:18:21.403 "name": "BaseBdev3", 00:18:21.403 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:21.403 "is_configured": true, 00:18:21.403 "data_offset": 2048, 00:18:21.403 "data_size": 63488 00:18:21.403 } 00:18:21.403 ] 00:18:21.403 }' 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.403 10:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.778 10:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.778 10:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.778 "name": "raid_bdev1", 00:18:22.778 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:22.778 "strip_size_kb": 64, 00:18:22.778 "state": "online", 00:18:22.778 "raid_level": "raid5f", 00:18:22.778 "superblock": true, 00:18:22.778 "num_base_bdevs": 3, 00:18:22.778 "num_base_bdevs_discovered": 3, 00:18:22.778 "num_base_bdevs_operational": 3, 00:18:22.778 "process": { 00:18:22.778 "type": "rebuild", 00:18:22.778 "target": "spare", 00:18:22.778 "progress": { 00:18:22.778 "blocks": 116736, 00:18:22.778 "percent": 91 00:18:22.778 } 00:18:22.778 }, 00:18:22.778 "base_bdevs_list": [ 00:18:22.778 { 00:18:22.778 "name": "spare", 00:18:22.778 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:22.778 "is_configured": true, 00:18:22.778 "data_offset": 2048, 00:18:22.778 "data_size": 63488 00:18:22.778 }, 00:18:22.778 { 00:18:22.778 "name": "BaseBdev2", 00:18:22.778 "uuid": 
"c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:22.778 "is_configured": true, 00:18:22.778 "data_offset": 2048, 00:18:22.778 "data_size": 63488 00:18:22.778 }, 00:18:22.778 { 00:18:22.778 "name": "BaseBdev3", 00:18:22.778 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:22.778 "is_configured": true, 00:18:22.778 "data_offset": 2048, 00:18:22.778 "data_size": 63488 00:18:22.778 } 00:18:22.778 ] 00:18:22.778 }' 00:18:22.778 10:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.778 10:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.778 10:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.778 10:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.778 10:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.037 [2024-11-15 10:46:53.401821] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:23.037 [2024-11-15 10:46:53.401950] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:23.037 [2024-11-15 10:46:53.402162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- 
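The trace above is the test's wait-for-rebuild loop: `bdev_raid.sh@707` checks `SECONDS < timeout`, `@708` calls `verify_raid_bdev_process`, and `@711` sleeps one second until the `process` key disappears from the RPC output (the "Finished rebuild" notice). A minimal sketch of that polling pattern follows; `fake_process_type` is a hypothetical stand-in for the real `rpc_cmd bdev_raid_get_bdevs all | jq -r '.process.type // "none"'` query against the SPDK RPC socket.

```shell
# Sketch of the polling loop traced at bdev_raid.sh @707-@711.
# fake_process_type is an assumption standing in for the SPDK RPC query;
# it reports "rebuild" for two ticks, then "none" (rebuild finished).
timeout=5
tick=0
fake_process_type() {
  if [ "$tick" -lt 2 ]; then echo rebuild; else echo none; fi
}

SECONDS=0                       # bash builtin: seconds since assignment
while (( SECONDS < timeout )); do
  ptype=$(fake_process_type)    # real test: jq '.process.type // "none"'
  [[ $ptype == rebuild ]] || break
  tick=$((tick + 1))
  sleep 1
done
echo "final process type: $ptype"
```

The `// "none"` jq alternative operator is what lets the loop terminate cleanly: once the rebuild completes, the RPC output has no `process` object, so the filter yields `none` instead of `null`.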
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.606 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.865 "name": "raid_bdev1", 00:18:23.865 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:23.865 "strip_size_kb": 64, 00:18:23.865 "state": "online", 00:18:23.865 "raid_level": "raid5f", 00:18:23.865 "superblock": true, 00:18:23.865 "num_base_bdevs": 3, 00:18:23.865 "num_base_bdevs_discovered": 3, 00:18:23.865 "num_base_bdevs_operational": 3, 00:18:23.865 "base_bdevs_list": [ 00:18:23.865 { 00:18:23.865 "name": "spare", 00:18:23.865 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:23.865 "is_configured": true, 00:18:23.865 "data_offset": 2048, 00:18:23.865 "data_size": 63488 00:18:23.865 }, 00:18:23.865 { 00:18:23.865 "name": "BaseBdev2", 00:18:23.865 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:23.865 "is_configured": true, 00:18:23.865 "data_offset": 2048, 00:18:23.865 "data_size": 63488 00:18:23.865 }, 00:18:23.865 { 00:18:23.865 "name": "BaseBdev3", 00:18:23.865 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:23.865 "is_configured": true, 00:18:23.865 "data_offset": 2048, 00:18:23.865 "data_size": 63488 00:18:23.865 } 00:18:23.865 ] 00:18:23.865 }' 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.865 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.865 "name": "raid_bdev1", 00:18:23.866 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:23.866 "strip_size_kb": 64, 00:18:23.866 "state": "online", 00:18:23.866 "raid_level": "raid5f", 00:18:23.866 "superblock": true, 00:18:23.866 "num_base_bdevs": 3, 00:18:23.866 "num_base_bdevs_discovered": 3, 00:18:23.866 "num_base_bdevs_operational": 3, 00:18:23.866 "base_bdevs_list": [ 
00:18:23.866 { 00:18:23.866 "name": "spare", 00:18:23.866 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:23.866 "is_configured": true, 00:18:23.866 "data_offset": 2048, 00:18:23.866 "data_size": 63488 00:18:23.866 }, 00:18:23.866 { 00:18:23.866 "name": "BaseBdev2", 00:18:23.866 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:23.866 "is_configured": true, 00:18:23.866 "data_offset": 2048, 00:18:23.866 "data_size": 63488 00:18:23.866 }, 00:18:23.866 { 00:18:23.866 "name": "BaseBdev3", 00:18:23.866 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:23.866 "is_configured": true, 00:18:23.866 "data_offset": 2048, 00:18:23.866 "data_size": 63488 00:18:23.866 } 00:18:23.866 ] 00:18:23.866 }' 00:18:23.866 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.866 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.866 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.124 10:46:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.124 "name": "raid_bdev1", 00:18:24.124 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:24.124 "strip_size_kb": 64, 00:18:24.124 "state": "online", 00:18:24.124 "raid_level": "raid5f", 00:18:24.124 "superblock": true, 00:18:24.124 "num_base_bdevs": 3, 00:18:24.124 "num_base_bdevs_discovered": 3, 00:18:24.124 "num_base_bdevs_operational": 3, 00:18:24.124 "base_bdevs_list": [ 00:18:24.124 { 00:18:24.124 "name": "spare", 00:18:24.124 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:24.124 "is_configured": true, 00:18:24.124 "data_offset": 2048, 00:18:24.124 "data_size": 63488 00:18:24.124 }, 00:18:24.124 { 00:18:24.124 "name": "BaseBdev2", 00:18:24.124 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:24.124 "is_configured": true, 00:18:24.124 "data_offset": 2048, 00:18:24.124 "data_size": 63488 00:18:24.124 }, 00:18:24.124 { 00:18:24.124 "name": "BaseBdev3", 00:18:24.124 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:24.124 "is_configured": true, 00:18:24.124 "data_offset": 2048, 00:18:24.124 
"data_size": 63488 00:18:24.124 } 00:18:24.124 ] 00:18:24.124 }' 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.124 10:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.695 [2024-11-15 10:46:55.016321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.695 [2024-11-15 10:46:55.016381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.695 [2024-11-15 10:46:55.016496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.695 [2024-11-15 10:46:55.016603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.695 [2024-11-15 10:46:55.016627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.695 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:24.954 /dev/nbd0 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:24.954 10:46:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.954 1+0 records in 00:18:24.954 1+0 records out 00:18:24.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293971 s, 13.9 MB/s 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.954 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:25.521 /dev/nbd1 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:25.521 10:46:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:25.521 1+0 records in 00:18:25.521 1+0 records out 00:18:25.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421911 s, 9.7 MB/s 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:25.521 10:46:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:25.521 10:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:25.521 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:25.521 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.521 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.521 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.521 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:25.521 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.521 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:26.089 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:26.089 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:26.089 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:26.089 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.089 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.089 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:26.089 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:26.089 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.089 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.089 
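The data check at `bdev_raid.sh@738` above is `cmp -i 1048576 /dev/nbd0 /dev/nbd1`: it compares the original base bdev with the rebuilt spare over nbd, skipping the first 1 MiB of both devices, which matches the superblock region implied by the RPC output (`data_offset` 2048 blocks × 512 B = 1048576 B). A sketch of that comparison with regular files standing in for the `/dev/nbd*` devices:

```shell
# Sketch of the superblock-skipping comparison at bdev_raid.sh @738,
# using temp files instead of nbd devices. The first 1 MiB (superblock
# region) intentionally differs; GNU cmp -i (--ignore-initial) skips it
# in both inputs before comparing.
a=$(mktemp); b=$(mktemp)
head -c 1048576 /dev/urandom > "$a"   # per-bdev superblock, differs
head -c 1048576 /dev/zero    > "$b"
printf 'rebuilt payload' >> "$a"      # data region, identical after rebuild
printf 'rebuilt payload' >> "$b"

if cmp -i 1048576 "$a" "$b"; then status=identical; else status=different; fi
rm -f "$a" "$b"
echo "data regions: $status"
```

If the rebuild had written wrong data past the superblock, `cmp` would exit non-zero and the test would fail at this step rather than at the later RPC state checks.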
10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.348 [2024-11-15 10:46:56.709137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:26.348 
[2024-11-15 10:46:56.709214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.348 [2024-11-15 10:46:56.709243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:26.348 [2024-11-15 10:46:56.709260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.348 [2024-11-15 10:46:56.712045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.348 [2024-11-15 10:46:56.712099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:26.348 [2024-11-15 10:46:56.712212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:26.348 [2024-11-15 10:46:56.712281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.348 [2024-11-15 10:46:56.712492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.348 [2024-11-15 10:46:56.712651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.348 spare 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.348 [2024-11-15 10:46:56.812789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:26.348 [2024-11-15 10:46:56.812867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:26.348 [2024-11-15 10:46:56.813284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:18:26.348 [2024-11-15 10:46:56.818161] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:26.348 [2024-11-15 10:46:56.818193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:26.348 [2024-11-15 10:46:56.818478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.348 "name": "raid_bdev1", 00:18:26.348 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:26.348 "strip_size_kb": 64, 00:18:26.348 "state": "online", 00:18:26.348 "raid_level": "raid5f", 00:18:26.348 "superblock": true, 00:18:26.348 "num_base_bdevs": 3, 00:18:26.348 "num_base_bdevs_discovered": 3, 00:18:26.348 "num_base_bdevs_operational": 3, 00:18:26.348 "base_bdevs_list": [ 00:18:26.348 { 00:18:26.348 "name": "spare", 00:18:26.348 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:26.348 "is_configured": true, 00:18:26.348 "data_offset": 2048, 00:18:26.348 "data_size": 63488 00:18:26.348 }, 00:18:26.348 { 00:18:26.348 "name": "BaseBdev2", 00:18:26.348 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:26.348 "is_configured": true, 00:18:26.348 "data_offset": 2048, 00:18:26.348 "data_size": 63488 00:18:26.348 }, 00:18:26.348 { 00:18:26.348 "name": "BaseBdev3", 00:18:26.348 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:26.348 "is_configured": true, 00:18:26.348 "data_offset": 2048, 00:18:26.348 "data_size": 63488 00:18:26.348 } 00:18:26.348 ] 00:18:26.348 }' 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.348 10:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.915 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.915 "name": "raid_bdev1", 00:18:26.915 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:26.915 "strip_size_kb": 64, 00:18:26.915 "state": "online", 00:18:26.915 "raid_level": "raid5f", 00:18:26.915 "superblock": true, 00:18:26.915 "num_base_bdevs": 3, 00:18:26.915 "num_base_bdevs_discovered": 3, 00:18:26.915 "num_base_bdevs_operational": 3, 00:18:26.915 "base_bdevs_list": [ 00:18:26.915 { 00:18:26.915 "name": "spare", 00:18:26.915 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:26.915 "is_configured": true, 00:18:26.915 "data_offset": 2048, 00:18:26.916 "data_size": 63488 00:18:26.916 }, 00:18:26.916 { 00:18:26.916 "name": "BaseBdev2", 00:18:26.916 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:26.916 "is_configured": true, 00:18:26.916 "data_offset": 2048, 00:18:26.916 "data_size": 63488 00:18:26.916 }, 00:18:26.916 { 00:18:26.916 "name": "BaseBdev3", 00:18:26.916 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:26.916 "is_configured": true, 00:18:26.916 "data_offset": 2048, 00:18:26.916 "data_size": 63488 00:18:26.916 } 00:18:26.916 ] 00:18:26.916 }' 00:18:26.916 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:18:26.916 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.916 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.174 [2024-11-15 10:46:57.571905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.174 "name": "raid_bdev1", 00:18:27.174 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:27.174 "strip_size_kb": 64, 00:18:27.174 "state": "online", 00:18:27.174 "raid_level": "raid5f", 00:18:27.174 "superblock": true, 00:18:27.174 "num_base_bdevs": 3, 00:18:27.174 "num_base_bdevs_discovered": 2, 00:18:27.174 "num_base_bdevs_operational": 2, 00:18:27.174 "base_bdevs_list": [ 00:18:27.174 { 00:18:27.174 "name": null, 00:18:27.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.174 "is_configured": false, 00:18:27.174 "data_offset": 0, 00:18:27.174 "data_size": 63488 00:18:27.174 }, 00:18:27.174 { 00:18:27.174 "name": "BaseBdev2", 
00:18:27.174 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:27.174 "is_configured": true, 00:18:27.174 "data_offset": 2048, 00:18:27.174 "data_size": 63488 00:18:27.174 }, 00:18:27.174 { 00:18:27.174 "name": "BaseBdev3", 00:18:27.174 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:27.174 "is_configured": true, 00:18:27.174 "data_offset": 2048, 00:18:27.174 "data_size": 63488 00:18:27.174 } 00:18:27.174 ] 00:18:27.174 }' 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.174 10:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.741 10:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:27.741 10:46:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.741 10:46:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.741 [2024-11-15 10:46:58.092048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.741 [2024-11-15 10:46:58.092292] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:27.741 [2024-11-15 10:46:58.092321] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:27.742 [2024-11-15 10:46:58.092397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.742 [2024-11-15 10:46:58.105732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:18:27.742 10:46:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.742 10:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:27.742 [2024-11-15 10:46:58.112632] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.678 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.679 "name": "raid_bdev1", 00:18:28.679 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:28.679 "strip_size_kb": 64, 00:18:28.679 "state": "online", 00:18:28.679 
"raid_level": "raid5f", 00:18:28.679 "superblock": true, 00:18:28.679 "num_base_bdevs": 3, 00:18:28.679 "num_base_bdevs_discovered": 3, 00:18:28.679 "num_base_bdevs_operational": 3, 00:18:28.679 "process": { 00:18:28.679 "type": "rebuild", 00:18:28.679 "target": "spare", 00:18:28.679 "progress": { 00:18:28.679 "blocks": 18432, 00:18:28.679 "percent": 14 00:18:28.679 } 00:18:28.679 }, 00:18:28.679 "base_bdevs_list": [ 00:18:28.679 { 00:18:28.679 "name": "spare", 00:18:28.679 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:28.679 "is_configured": true, 00:18:28.679 "data_offset": 2048, 00:18:28.679 "data_size": 63488 00:18:28.679 }, 00:18:28.679 { 00:18:28.679 "name": "BaseBdev2", 00:18:28.679 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:28.679 "is_configured": true, 00:18:28.679 "data_offset": 2048, 00:18:28.679 "data_size": 63488 00:18:28.679 }, 00:18:28.679 { 00:18:28.679 "name": "BaseBdev3", 00:18:28.679 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:28.679 "is_configured": true, 00:18:28.679 "data_offset": 2048, 00:18:28.679 "data_size": 63488 00:18:28.679 } 00:18:28.679 ] 00:18:28.679 }' 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.679 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.937 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.937 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:28.937 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.937 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.937 [2024-11-15 10:46:59.302254] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.937 [2024-11-15 10:46:59.326437] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:28.937 [2024-11-15 10:46:59.326550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.937 [2024-11-15 10:46:59.326576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.938 [2024-11-15 10:46:59.326591] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.938 "name": "raid_bdev1", 00:18:28.938 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:28.938 "strip_size_kb": 64, 00:18:28.938 "state": "online", 00:18:28.938 "raid_level": "raid5f", 00:18:28.938 "superblock": true, 00:18:28.938 "num_base_bdevs": 3, 00:18:28.938 "num_base_bdevs_discovered": 2, 00:18:28.938 "num_base_bdevs_operational": 2, 00:18:28.938 "base_bdevs_list": [ 00:18:28.938 { 00:18:28.938 "name": null, 00:18:28.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.938 "is_configured": false, 00:18:28.938 "data_offset": 0, 00:18:28.938 "data_size": 63488 00:18:28.938 }, 00:18:28.938 { 00:18:28.938 "name": "BaseBdev2", 00:18:28.938 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:28.938 "is_configured": true, 00:18:28.938 "data_offset": 2048, 00:18:28.938 "data_size": 63488 00:18:28.938 }, 00:18:28.938 { 00:18:28.938 "name": "BaseBdev3", 00:18:28.938 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:28.938 "is_configured": true, 00:18:28.938 "data_offset": 2048, 00:18:28.938 "data_size": 63488 00:18:28.938 } 00:18:28.938 ] 00:18:28.938 }' 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.938 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.505 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:29.505 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.505 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.505 [2024-11-15 10:46:59.884271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:29.505 [2024-11-15 10:46:59.884372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.505 [2024-11-15 10:46:59.884406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:29.505 [2024-11-15 10:46:59.884427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.505 [2024-11-15 10:46:59.885011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.505 [2024-11-15 10:46:59.885088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:29.505 [2024-11-15 10:46:59.885217] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:29.505 [2024-11-15 10:46:59.885243] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:29.505 [2024-11-15 10:46:59.885258] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:29.505 [2024-11-15 10:46:59.885293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.505 [2024-11-15 10:46:59.898541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:18:29.505 spare 00:18:29.505 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.505 10:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:29.505 [2024-11-15 10:46:59.905451] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.438 "name": "raid_bdev1", 00:18:30.438 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:30.438 "strip_size_kb": 64, 00:18:30.438 "state": 
"online", 00:18:30.438 "raid_level": "raid5f", 00:18:30.438 "superblock": true, 00:18:30.438 "num_base_bdevs": 3, 00:18:30.438 "num_base_bdevs_discovered": 3, 00:18:30.438 "num_base_bdevs_operational": 3, 00:18:30.438 "process": { 00:18:30.438 "type": "rebuild", 00:18:30.438 "target": "spare", 00:18:30.438 "progress": { 00:18:30.438 "blocks": 18432, 00:18:30.438 "percent": 14 00:18:30.438 } 00:18:30.438 }, 00:18:30.438 "base_bdevs_list": [ 00:18:30.438 { 00:18:30.438 "name": "spare", 00:18:30.438 "uuid": "eae3c9db-04d2-5d3c-a160-1871e4c7d17e", 00:18:30.438 "is_configured": true, 00:18:30.438 "data_offset": 2048, 00:18:30.438 "data_size": 63488 00:18:30.438 }, 00:18:30.438 { 00:18:30.438 "name": "BaseBdev2", 00:18:30.438 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:30.438 "is_configured": true, 00:18:30.438 "data_offset": 2048, 00:18:30.438 "data_size": 63488 00:18:30.438 }, 00:18:30.438 { 00:18:30.438 "name": "BaseBdev3", 00:18:30.438 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:30.438 "is_configured": true, 00:18:30.438 "data_offset": 2048, 00:18:30.438 "data_size": 63488 00:18:30.438 } 00:18:30.438 ] 00:18:30.438 }' 00:18:30.438 10:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.697 [2024-11-15 10:47:01.062940] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.697 [2024-11-15 10:47:01.119191] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.697 [2024-11-15 10:47:01.119531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.697 [2024-11-15 10:47:01.119571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.697 [2024-11-15 10:47:01.119586] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.697 "name": "raid_bdev1", 00:18:30.697 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:30.697 "strip_size_kb": 64, 00:18:30.697 "state": "online", 00:18:30.697 "raid_level": "raid5f", 00:18:30.697 "superblock": true, 00:18:30.697 "num_base_bdevs": 3, 00:18:30.697 "num_base_bdevs_discovered": 2, 00:18:30.697 "num_base_bdevs_operational": 2, 00:18:30.697 "base_bdevs_list": [ 00:18:30.697 { 00:18:30.697 "name": null, 00:18:30.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.697 "is_configured": false, 00:18:30.697 "data_offset": 0, 00:18:30.697 "data_size": 63488 00:18:30.697 }, 00:18:30.697 { 00:18:30.697 "name": "BaseBdev2", 00:18:30.697 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:30.697 "is_configured": true, 00:18:30.697 "data_offset": 2048, 00:18:30.697 "data_size": 63488 00:18:30.697 }, 00:18:30.697 { 00:18:30.697 "name": "BaseBdev3", 00:18:30.697 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:30.697 "is_configured": true, 00:18:30.697 "data_offset": 2048, 00:18:30.697 "data_size": 63488 00:18:30.697 } 00:18:30.697 ] 00:18:30.697 }' 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.697 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.265 "name": "raid_bdev1", 00:18:31.265 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:31.265 "strip_size_kb": 64, 00:18:31.265 "state": "online", 00:18:31.265 "raid_level": "raid5f", 00:18:31.265 "superblock": true, 00:18:31.265 "num_base_bdevs": 3, 00:18:31.265 "num_base_bdevs_discovered": 2, 00:18:31.265 "num_base_bdevs_operational": 2, 00:18:31.265 "base_bdevs_list": [ 00:18:31.265 { 00:18:31.265 "name": null, 00:18:31.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.265 "is_configured": false, 00:18:31.265 "data_offset": 0, 00:18:31.265 "data_size": 63488 00:18:31.265 }, 00:18:31.265 { 00:18:31.265 "name": "BaseBdev2", 00:18:31.265 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:31.265 "is_configured": true, 00:18:31.265 "data_offset": 2048, 00:18:31.265 "data_size": 63488 00:18:31.265 }, 00:18:31.265 { 00:18:31.265 "name": "BaseBdev3", 00:18:31.265 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:31.265 "is_configured": true, 
00:18:31.265 "data_offset": 2048, 00:18:31.265 "data_size": 63488 00:18:31.265 } 00:18:31.265 ] 00:18:31.265 }' 00:18:31.265 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.524 [2024-11-15 10:47:01.893458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:31.524 [2024-11-15 10:47:01.893531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.524 [2024-11-15 10:47:01.893568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:31.524 [2024-11-15 10:47:01.893583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.524 [2024-11-15 10:47:01.894139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.524 [2024-11-15 
10:47:01.894174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:31.524 [2024-11-15 10:47:01.894293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:31.524 [2024-11-15 10:47:01.894323] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:31.524 [2024-11-15 10:47:01.894373] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:31.524 [2024-11-15 10:47:01.894388] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:31.524 BaseBdev1 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.524 10:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.460 10:47:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.460 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.460 "name": "raid_bdev1", 00:18:32.460 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:32.460 "strip_size_kb": 64, 00:18:32.460 "state": "online", 00:18:32.460 "raid_level": "raid5f", 00:18:32.460 "superblock": true, 00:18:32.460 "num_base_bdevs": 3, 00:18:32.460 "num_base_bdevs_discovered": 2, 00:18:32.460 "num_base_bdevs_operational": 2, 00:18:32.460 "base_bdevs_list": [ 00:18:32.460 { 00:18:32.460 "name": null, 00:18:32.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.460 "is_configured": false, 00:18:32.460 "data_offset": 0, 00:18:32.460 "data_size": 63488 00:18:32.460 }, 00:18:32.460 { 00:18:32.460 "name": "BaseBdev2", 00:18:32.460 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:32.460 "is_configured": true, 00:18:32.460 "data_offset": 2048, 00:18:32.460 "data_size": 63488 00:18:32.460 }, 00:18:32.461 { 00:18:32.461 "name": "BaseBdev3", 00:18:32.461 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:32.461 "is_configured": true, 00:18:32.461 "data_offset": 2048, 00:18:32.461 "data_size": 63488 00:18:32.461 } 00:18:32.461 ] 00:18:32.461 }' 00:18:32.461 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.461 10:47:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.027 "name": "raid_bdev1", 00:18:33.027 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:33.027 "strip_size_kb": 64, 00:18:33.027 "state": "online", 00:18:33.027 "raid_level": "raid5f", 00:18:33.027 "superblock": true, 00:18:33.027 "num_base_bdevs": 3, 00:18:33.027 "num_base_bdevs_discovered": 2, 00:18:33.027 "num_base_bdevs_operational": 2, 00:18:33.027 "base_bdevs_list": [ 00:18:33.027 { 00:18:33.027 "name": null, 00:18:33.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.027 "is_configured": false, 00:18:33.027 "data_offset": 0, 00:18:33.027 "data_size": 63488 00:18:33.027 }, 00:18:33.027 { 00:18:33.027 "name": "BaseBdev2", 00:18:33.027 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 
00:18:33.027 "is_configured": true, 00:18:33.027 "data_offset": 2048, 00:18:33.027 "data_size": 63488 00:18:33.027 }, 00:18:33.027 { 00:18:33.027 "name": "BaseBdev3", 00:18:33.027 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:33.027 "is_configured": true, 00:18:33.027 "data_offset": 2048, 00:18:33.027 "data_size": 63488 00:18:33.027 } 00:18:33.027 ] 00:18:33.027 }' 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.027 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.285 10:47:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.285 [2024-11-15 10:47:03.602057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.285 [2024-11-15 10:47:03.602281] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:33.285 [2024-11-15 10:47:03.602314] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:33.285 request: 00:18:33.285 { 00:18:33.285 "base_bdev": "BaseBdev1", 00:18:33.285 "raid_bdev": "raid_bdev1", 00:18:33.285 "method": "bdev_raid_add_base_bdev", 00:18:33.285 "req_id": 1 00:18:33.285 } 00:18:33.285 Got JSON-RPC error response 00:18:33.285 response: 00:18:33.285 { 00:18:33.285 "code": -22, 00:18:33.285 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:33.285 } 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.285 10:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.219 "name": "raid_bdev1", 00:18:34.219 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:34.219 "strip_size_kb": 64, 00:18:34.219 "state": "online", 00:18:34.219 "raid_level": "raid5f", 00:18:34.219 "superblock": true, 00:18:34.219 "num_base_bdevs": 3, 00:18:34.219 "num_base_bdevs_discovered": 2, 00:18:34.219 "num_base_bdevs_operational": 2, 00:18:34.219 "base_bdevs_list": [ 00:18:34.219 { 00:18:34.219 "name": null, 00:18:34.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.219 "is_configured": false, 00:18:34.219 "data_offset": 0, 00:18:34.219 "data_size": 63488 00:18:34.219 }, 00:18:34.219 { 00:18:34.219 
"name": "BaseBdev2", 00:18:34.219 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:34.219 "is_configured": true, 00:18:34.219 "data_offset": 2048, 00:18:34.219 "data_size": 63488 00:18:34.219 }, 00:18:34.219 { 00:18:34.219 "name": "BaseBdev3", 00:18:34.219 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:34.219 "is_configured": true, 00:18:34.219 "data_offset": 2048, 00:18:34.219 "data_size": 63488 00:18:34.219 } 00:18:34.219 ] 00:18:34.219 }' 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.219 10:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.784 "name": "raid_bdev1", 00:18:34.784 "uuid": "9fc70d7f-ff4f-40aa-85a2-de022a1f60a1", 00:18:34.784 
"strip_size_kb": 64, 00:18:34.784 "state": "online", 00:18:34.784 "raid_level": "raid5f", 00:18:34.784 "superblock": true, 00:18:34.784 "num_base_bdevs": 3, 00:18:34.784 "num_base_bdevs_discovered": 2, 00:18:34.784 "num_base_bdevs_operational": 2, 00:18:34.784 "base_bdevs_list": [ 00:18:34.784 { 00:18:34.784 "name": null, 00:18:34.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.784 "is_configured": false, 00:18:34.784 "data_offset": 0, 00:18:34.784 "data_size": 63488 00:18:34.784 }, 00:18:34.784 { 00:18:34.784 "name": "BaseBdev2", 00:18:34.784 "uuid": "c5100a2f-1238-5863-bbcf-ef78f33ffa93", 00:18:34.784 "is_configured": true, 00:18:34.784 "data_offset": 2048, 00:18:34.784 "data_size": 63488 00:18:34.784 }, 00:18:34.784 { 00:18:34.784 "name": "BaseBdev3", 00:18:34.784 "uuid": "0a20b748-28b0-5d58-bd75-1f9984ff2e1b", 00:18:34.784 "is_configured": true, 00:18:34.784 "data_offset": 2048, 00:18:34.784 "data_size": 63488 00:18:34.784 } 00:18:34.784 ] 00:18:34.784 }' 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82545 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82545 ']' 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82545 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:34.784 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:34.784 10:47:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82545 00:18:35.043 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:35.043 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:35.043 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82545' 00:18:35.043 killing process with pid 82545 00:18:35.043 Received shutdown signal, test time was about 60.000000 seconds 00:18:35.043 00:18:35.043 Latency(us) 00:18:35.043 [2024-11-15T10:47:05.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.043 [2024-11-15T10:47:05.603Z] =================================================================================================================== 00:18:35.043 [2024-11-15T10:47:05.603Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.043 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82545 00:18:35.043 [2024-11-15 10:47:05.344404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.043 10:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82545 00:18:35.043 [2024-11-15 10:47:05.344563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.043 [2024-11-15 10:47:05.344649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.043 [2024-11-15 10:47:05.344670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:35.318 [2024-11-15 10:47:05.722366] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.253 10:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:36.253 00:18:36.253 real 0m25.407s 00:18:36.253 user 0m34.267s 
00:18:36.253 sys 0m2.561s 00:18:36.253 10:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:36.253 10:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.253 ************************************ 00:18:36.253 END TEST raid5f_rebuild_test_sb 00:18:36.253 ************************************ 00:18:36.512 10:47:06 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:36.512 10:47:06 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:36.512 10:47:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:36.512 10:47:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:36.512 10:47:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.512 ************************************ 00:18:36.512 START TEST raid5f_state_function_test 00:18:36.512 ************************************ 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.512 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83311 00:18:36.513 Process raid pid: 83311 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83311' 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83311 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83311 ']' 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.513 10:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.513 [2024-11-15 10:47:06.939003] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:18:36.513 [2024-11-15 10:47:06.939172] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.772 [2024-11-15 10:47:07.125064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.772 [2024-11-15 10:47:07.261527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.030 [2024-11-15 10:47:07.485750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.030 [2024-11-15 10:47:07.485813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.597 10:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.597 10:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:18:37.597 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:37.597 10:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.597 10:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.597 [2024-11-15 10:47:07.884189] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:37.597 [2024-11-15 10:47:07.884261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:37.597 [2024-11-15 10:47:07.884278] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:37.597 [2024-11-15 10:47:07.884295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:37.597 [2024-11-15 10:47:07.884305] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:37.597 [2024-11-15 10:47:07.884318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:37.597 [2024-11-15 10:47:07.884328] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:37.597 [2024-11-15 10:47:07.884341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:37.597 10:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.597 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:37.597 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.598 10:47:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.598 "name": "Existed_Raid", 00:18:37.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.598 "strip_size_kb": 64, 00:18:37.598 "state": "configuring", 00:18:37.598 "raid_level": "raid5f", 00:18:37.598 "superblock": false, 00:18:37.598 "num_base_bdevs": 4, 00:18:37.598 "num_base_bdevs_discovered": 0, 00:18:37.598 "num_base_bdevs_operational": 4, 00:18:37.598 "base_bdevs_list": [ 00:18:37.598 { 00:18:37.598 "name": "BaseBdev1", 00:18:37.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.598 "is_configured": false, 00:18:37.598 "data_offset": 0, 00:18:37.598 "data_size": 0 00:18:37.598 }, 00:18:37.598 { 00:18:37.598 "name": "BaseBdev2", 00:18:37.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.598 "is_configured": false, 00:18:37.598 "data_offset": 0, 00:18:37.598 "data_size": 0 00:18:37.598 }, 00:18:37.598 { 00:18:37.598 "name": "BaseBdev3", 00:18:37.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.598 "is_configured": false, 00:18:37.598 "data_offset": 0, 00:18:37.598 "data_size": 0 00:18:37.598 }, 00:18:37.598 { 00:18:37.598 "name": "BaseBdev4", 00:18:37.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.598 "is_configured": false, 00:18:37.598 "data_offset": 0, 00:18:37.598 "data_size": 0 00:18:37.598 } 00:18:37.598 ] 00:18:37.598 }' 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.598 10:47:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.857 [2024-11-15 10:47:08.400259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:37.857 [2024-11-15 10:47:08.400311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.857 [2024-11-15 10:47:08.408254] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:37.857 [2024-11-15 10:47:08.408316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:37.857 [2024-11-15 10:47:08.408333] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:37.857 [2024-11-15 10:47:08.408368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:37.857 [2024-11-15 10:47:08.408381] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:37.857 [2024-11-15 10:47:08.408396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:37.857 [2024-11-15 10:47:08.408405] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:18:37.857 [2024-11-15 10:47:08.408419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.857 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.115 [2024-11-15 10:47:08.450126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.115 BaseBdev1 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.115 
10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.115 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.115 [ 00:18:38.115 { 00:18:38.115 "name": "BaseBdev1", 00:18:38.115 "aliases": [ 00:18:38.115 "966b3a80-5660-4f70-9822-8ea371bc6752" 00:18:38.115 ], 00:18:38.115 "product_name": "Malloc disk", 00:18:38.115 "block_size": 512, 00:18:38.115 "num_blocks": 65536, 00:18:38.115 "uuid": "966b3a80-5660-4f70-9822-8ea371bc6752", 00:18:38.115 "assigned_rate_limits": { 00:18:38.115 "rw_ios_per_sec": 0, 00:18:38.115 "rw_mbytes_per_sec": 0, 00:18:38.115 "r_mbytes_per_sec": 0, 00:18:38.115 "w_mbytes_per_sec": 0 00:18:38.115 }, 00:18:38.115 "claimed": true, 00:18:38.115 "claim_type": "exclusive_write", 00:18:38.115 "zoned": false, 00:18:38.115 "supported_io_types": { 00:18:38.115 "read": true, 00:18:38.115 "write": true, 00:18:38.115 "unmap": true, 00:18:38.115 "flush": true, 00:18:38.115 "reset": true, 00:18:38.115 "nvme_admin": false, 00:18:38.115 "nvme_io": false, 00:18:38.115 "nvme_io_md": false, 00:18:38.115 "write_zeroes": true, 00:18:38.115 "zcopy": true, 00:18:38.115 "get_zone_info": false, 00:18:38.115 "zone_management": false, 00:18:38.116 "zone_append": false, 00:18:38.116 "compare": false, 00:18:38.116 "compare_and_write": false, 00:18:38.116 "abort": true, 00:18:38.116 "seek_hole": false, 00:18:38.116 "seek_data": false, 00:18:38.116 "copy": true, 00:18:38.116 "nvme_iov_md": false 00:18:38.116 }, 00:18:38.116 "memory_domains": [ 00:18:38.116 { 00:18:38.116 "dma_device_id": "system", 00:18:38.116 "dma_device_type": 1 00:18:38.116 }, 00:18:38.116 { 00:18:38.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.116 "dma_device_type": 2 00:18:38.116 } 00:18:38.116 ], 00:18:38.116 "driver_specific": {} 00:18:38.116 } 
00:18:38.116 ] 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.116 "name": "Existed_Raid", 00:18:38.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.116 "strip_size_kb": 64, 00:18:38.116 "state": "configuring", 00:18:38.116 "raid_level": "raid5f", 00:18:38.116 "superblock": false, 00:18:38.116 "num_base_bdevs": 4, 00:18:38.116 "num_base_bdevs_discovered": 1, 00:18:38.116 "num_base_bdevs_operational": 4, 00:18:38.116 "base_bdevs_list": [ 00:18:38.116 { 00:18:38.116 "name": "BaseBdev1", 00:18:38.116 "uuid": "966b3a80-5660-4f70-9822-8ea371bc6752", 00:18:38.116 "is_configured": true, 00:18:38.116 "data_offset": 0, 00:18:38.116 "data_size": 65536 00:18:38.116 }, 00:18:38.116 { 00:18:38.116 "name": "BaseBdev2", 00:18:38.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.116 "is_configured": false, 00:18:38.116 "data_offset": 0, 00:18:38.116 "data_size": 0 00:18:38.116 }, 00:18:38.116 { 00:18:38.116 "name": "BaseBdev3", 00:18:38.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.116 "is_configured": false, 00:18:38.116 "data_offset": 0, 00:18:38.116 "data_size": 0 00:18:38.116 }, 00:18:38.116 { 00:18:38.116 "name": "BaseBdev4", 00:18:38.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.116 "is_configured": false, 00:18:38.116 "data_offset": 0, 00:18:38.116 "data_size": 0 00:18:38.116 } 00:18:38.116 ] 00:18:38.116 }' 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.116 10:47:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.682 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:38.682 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.682 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.682 
[2024-11-15 10:47:09.030378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:38.682 [2024-11-15 10:47:09.030451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:38.682 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.682 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:38.682 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.682 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.682 [2024-11-15 10:47:09.038455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.682 [2024-11-15 10:47:09.040792] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.682 [2024-11-15 10:47:09.040853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.683 [2024-11-15 10:47:09.040870] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:38.683 [2024-11-15 10:47:09.040887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:38.683 [2024-11-15 10:47:09.040898] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:38.683 [2024-11-15 10:47:09.040912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.683 "name": "Existed_Raid", 00:18:38.683 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:38.683 "strip_size_kb": 64, 00:18:38.683 "state": "configuring", 00:18:38.683 "raid_level": "raid5f", 00:18:38.683 "superblock": false, 00:18:38.683 "num_base_bdevs": 4, 00:18:38.683 "num_base_bdevs_discovered": 1, 00:18:38.683 "num_base_bdevs_operational": 4, 00:18:38.683 "base_bdevs_list": [ 00:18:38.683 { 00:18:38.683 "name": "BaseBdev1", 00:18:38.683 "uuid": "966b3a80-5660-4f70-9822-8ea371bc6752", 00:18:38.683 "is_configured": true, 00:18:38.683 "data_offset": 0, 00:18:38.683 "data_size": 65536 00:18:38.683 }, 00:18:38.683 { 00:18:38.683 "name": "BaseBdev2", 00:18:38.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.683 "is_configured": false, 00:18:38.683 "data_offset": 0, 00:18:38.683 "data_size": 0 00:18:38.683 }, 00:18:38.683 { 00:18:38.683 "name": "BaseBdev3", 00:18:38.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.683 "is_configured": false, 00:18:38.683 "data_offset": 0, 00:18:38.683 "data_size": 0 00:18:38.683 }, 00:18:38.683 { 00:18:38.683 "name": "BaseBdev4", 00:18:38.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.683 "is_configured": false, 00:18:38.683 "data_offset": 0, 00:18:38.683 "data_size": 0 00:18:38.683 } 00:18:38.683 ] 00:18:38.683 }' 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.683 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.250 [2024-11-15 10:47:09.629777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.250 BaseBdev2 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.250 [ 00:18:39.250 { 00:18:39.250 "name": "BaseBdev2", 00:18:39.250 "aliases": [ 00:18:39.250 "f87cb922-0be7-481f-bdb3-1ad3e45bdb9b" 00:18:39.250 ], 00:18:39.250 "product_name": "Malloc disk", 00:18:39.250 "block_size": 512, 00:18:39.250 "num_blocks": 65536, 00:18:39.250 "uuid": "f87cb922-0be7-481f-bdb3-1ad3e45bdb9b", 00:18:39.250 "assigned_rate_limits": { 00:18:39.250 "rw_ios_per_sec": 0, 00:18:39.250 "rw_mbytes_per_sec": 0, 00:18:39.250 
"r_mbytes_per_sec": 0, 00:18:39.250 "w_mbytes_per_sec": 0 00:18:39.250 }, 00:18:39.250 "claimed": true, 00:18:39.250 "claim_type": "exclusive_write", 00:18:39.250 "zoned": false, 00:18:39.250 "supported_io_types": { 00:18:39.250 "read": true, 00:18:39.250 "write": true, 00:18:39.250 "unmap": true, 00:18:39.250 "flush": true, 00:18:39.250 "reset": true, 00:18:39.250 "nvme_admin": false, 00:18:39.250 "nvme_io": false, 00:18:39.250 "nvme_io_md": false, 00:18:39.250 "write_zeroes": true, 00:18:39.250 "zcopy": true, 00:18:39.250 "get_zone_info": false, 00:18:39.250 "zone_management": false, 00:18:39.250 "zone_append": false, 00:18:39.250 "compare": false, 00:18:39.250 "compare_and_write": false, 00:18:39.250 "abort": true, 00:18:39.250 "seek_hole": false, 00:18:39.250 "seek_data": false, 00:18:39.250 "copy": true, 00:18:39.250 "nvme_iov_md": false 00:18:39.250 }, 00:18:39.250 "memory_domains": [ 00:18:39.250 { 00:18:39.250 "dma_device_id": "system", 00:18:39.250 "dma_device_type": 1 00:18:39.250 }, 00:18:39.250 { 00:18:39.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.250 "dma_device_type": 2 00:18:39.250 } 00:18:39.250 ], 00:18:39.250 "driver_specific": {} 00:18:39.250 } 00:18:39.250 ] 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.250 "name": "Existed_Raid", 00:18:39.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.250 "strip_size_kb": 64, 00:18:39.250 "state": "configuring", 00:18:39.250 "raid_level": "raid5f", 00:18:39.250 "superblock": false, 00:18:39.250 "num_base_bdevs": 4, 00:18:39.250 "num_base_bdevs_discovered": 2, 00:18:39.250 "num_base_bdevs_operational": 4, 00:18:39.250 "base_bdevs_list": [ 00:18:39.250 { 00:18:39.250 "name": "BaseBdev1", 00:18:39.250 "uuid": 
"966b3a80-5660-4f70-9822-8ea371bc6752", 00:18:39.250 "is_configured": true, 00:18:39.250 "data_offset": 0, 00:18:39.250 "data_size": 65536 00:18:39.250 }, 00:18:39.250 { 00:18:39.250 "name": "BaseBdev2", 00:18:39.250 "uuid": "f87cb922-0be7-481f-bdb3-1ad3e45bdb9b", 00:18:39.250 "is_configured": true, 00:18:39.250 "data_offset": 0, 00:18:39.250 "data_size": 65536 00:18:39.250 }, 00:18:39.250 { 00:18:39.250 "name": "BaseBdev3", 00:18:39.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.250 "is_configured": false, 00:18:39.250 "data_offset": 0, 00:18:39.250 "data_size": 0 00:18:39.250 }, 00:18:39.250 { 00:18:39.250 "name": "BaseBdev4", 00:18:39.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.250 "is_configured": false, 00:18:39.250 "data_offset": 0, 00:18:39.250 "data_size": 0 00:18:39.250 } 00:18:39.250 ] 00:18:39.250 }' 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.250 10:47:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.818 [2024-11-15 10:47:10.346634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.818 BaseBdev3 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.818 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.818 [ 00:18:39.818 { 00:18:39.818 "name": "BaseBdev3", 00:18:39.818 "aliases": [ 00:18:39.818 "b7ff02fc-bca1-457a-922b-2481162261a2" 00:18:39.818 ], 00:18:39.818 "product_name": "Malloc disk", 00:18:39.818 "block_size": 512, 00:18:39.818 "num_blocks": 65536, 00:18:39.818 "uuid": "b7ff02fc-bca1-457a-922b-2481162261a2", 00:18:39.818 "assigned_rate_limits": { 00:18:39.818 "rw_ios_per_sec": 0, 00:18:39.818 "rw_mbytes_per_sec": 0, 00:18:39.818 "r_mbytes_per_sec": 0, 00:18:39.818 "w_mbytes_per_sec": 0 00:18:39.818 }, 00:18:39.818 "claimed": true, 00:18:39.818 "claim_type": "exclusive_write", 00:18:39.818 "zoned": false, 00:18:39.818 "supported_io_types": { 00:18:39.818 "read": true, 00:18:39.818 "write": true, 00:18:39.818 "unmap": true, 00:18:39.818 "flush": true, 00:18:39.818 "reset": true, 00:18:39.818 "nvme_admin": false, 
00:18:39.818 "nvme_io": false, 00:18:39.818 "nvme_io_md": false, 00:18:39.818 "write_zeroes": true, 00:18:39.818 "zcopy": true, 00:18:39.818 "get_zone_info": false, 00:18:39.818 "zone_management": false, 00:18:39.818 "zone_append": false, 00:18:39.818 "compare": false, 00:18:39.818 "compare_and_write": false, 00:18:39.818 "abort": true, 00:18:39.818 "seek_hole": false, 00:18:39.818 "seek_data": false, 00:18:39.818 "copy": true, 00:18:39.818 "nvme_iov_md": false 00:18:39.818 }, 00:18:39.818 "memory_domains": [ 00:18:39.818 { 00:18:39.818 "dma_device_id": "system", 00:18:39.818 "dma_device_type": 1 00:18:39.818 }, 00:18:40.076 { 00:18:40.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.076 "dma_device_type": 2 00:18:40.076 } 00:18:40.076 ], 00:18:40.076 "driver_specific": {} 00:18:40.076 } 00:18:40.076 ] 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.076 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.076 "name": "Existed_Raid", 00:18:40.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.076 "strip_size_kb": 64, 00:18:40.076 "state": "configuring", 00:18:40.076 "raid_level": "raid5f", 00:18:40.076 "superblock": false, 00:18:40.076 "num_base_bdevs": 4, 00:18:40.076 "num_base_bdevs_discovered": 3, 00:18:40.076 "num_base_bdevs_operational": 4, 00:18:40.077 "base_bdevs_list": [ 00:18:40.077 { 00:18:40.077 "name": "BaseBdev1", 00:18:40.077 "uuid": "966b3a80-5660-4f70-9822-8ea371bc6752", 00:18:40.077 "is_configured": true, 00:18:40.077 "data_offset": 0, 00:18:40.077 "data_size": 65536 00:18:40.077 }, 00:18:40.077 { 00:18:40.077 "name": "BaseBdev2", 00:18:40.077 "uuid": "f87cb922-0be7-481f-bdb3-1ad3e45bdb9b", 00:18:40.077 "is_configured": true, 00:18:40.077 "data_offset": 0, 00:18:40.077 "data_size": 65536 00:18:40.077 }, 00:18:40.077 { 
00:18:40.077 "name": "BaseBdev3", 00:18:40.077 "uuid": "b7ff02fc-bca1-457a-922b-2481162261a2", 00:18:40.077 "is_configured": true, 00:18:40.077 "data_offset": 0, 00:18:40.077 "data_size": 65536 00:18:40.077 }, 00:18:40.077 { 00:18:40.077 "name": "BaseBdev4", 00:18:40.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.077 "is_configured": false, 00:18:40.077 "data_offset": 0, 00:18:40.077 "data_size": 0 00:18:40.077 } 00:18:40.077 ] 00:18:40.077 }' 00:18:40.077 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.077 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.705 [2024-11-15 10:47:10.941720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:40.705 [2024-11-15 10:47:10.941823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:40.705 [2024-11-15 10:47:10.941839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:40.705 [2024-11-15 10:47:10.942197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:40.705 [2024-11-15 10:47:10.949335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:40.705 [2024-11-15 10:47:10.949402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:40.705 [2024-11-15 10:47:10.949815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.705 BaseBdev4 00:18:40.705 10:47:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.705 [ 00:18:40.705 { 00:18:40.705 "name": "BaseBdev4", 00:18:40.705 "aliases": [ 00:18:40.705 "27acb9cd-7df8-4339-b123-456505acd327" 00:18:40.705 ], 00:18:40.705 "product_name": "Malloc disk", 00:18:40.705 "block_size": 512, 00:18:40.705 "num_blocks": 65536, 00:18:40.705 "uuid": "27acb9cd-7df8-4339-b123-456505acd327", 00:18:40.705 "assigned_rate_limits": { 00:18:40.705 "rw_ios_per_sec": 0, 00:18:40.705 
"rw_mbytes_per_sec": 0, 00:18:40.705 "r_mbytes_per_sec": 0, 00:18:40.705 "w_mbytes_per_sec": 0 00:18:40.705 }, 00:18:40.705 "claimed": true, 00:18:40.705 "claim_type": "exclusive_write", 00:18:40.705 "zoned": false, 00:18:40.705 "supported_io_types": { 00:18:40.705 "read": true, 00:18:40.705 "write": true, 00:18:40.705 "unmap": true, 00:18:40.705 "flush": true, 00:18:40.705 "reset": true, 00:18:40.705 "nvme_admin": false, 00:18:40.705 "nvme_io": false, 00:18:40.705 "nvme_io_md": false, 00:18:40.705 "write_zeroes": true, 00:18:40.705 "zcopy": true, 00:18:40.705 "get_zone_info": false, 00:18:40.705 "zone_management": false, 00:18:40.705 "zone_append": false, 00:18:40.705 "compare": false, 00:18:40.705 "compare_and_write": false, 00:18:40.705 "abort": true, 00:18:40.705 "seek_hole": false, 00:18:40.705 "seek_data": false, 00:18:40.705 "copy": true, 00:18:40.705 "nvme_iov_md": false 00:18:40.705 }, 00:18:40.705 "memory_domains": [ 00:18:40.705 { 00:18:40.705 "dma_device_id": "system", 00:18:40.705 "dma_device_type": 1 00:18:40.705 }, 00:18:40.705 { 00:18:40.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.705 "dma_device_type": 2 00:18:40.705 } 00:18:40.705 ], 00:18:40.705 "driver_specific": {} 00:18:40.705 } 00:18:40.705 ] 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:40.705 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.706 10:47:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:40.706 10:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:40.706 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:40.706 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:40.706 "name": "Existed_Raid",
00:18:40.706 "uuid": "2ebd010a-1459-4469-8e41-37407cb5f70e",
00:18:40.706 "strip_size_kb": 64,
00:18:40.706 "state": "online",
00:18:40.706 "raid_level": "raid5f",
00:18:40.706 "superblock": false,
00:18:40.706 "num_base_bdevs": 4,
00:18:40.706 "num_base_bdevs_discovered": 4,
00:18:40.706 "num_base_bdevs_operational": 4,
00:18:40.706 "base_bdevs_list": [
00:18:40.706 {
00:18:40.706 "name": "BaseBdev1",
00:18:40.706 "uuid": "966b3a80-5660-4f70-9822-8ea371bc6752",
00:18:40.706 "is_configured": true,
00:18:40.706 "data_offset": 0,
00:18:40.706 "data_size": 65536
00:18:40.706 },
00:18:40.706 {
00:18:40.706 "name": "BaseBdev2",
00:18:40.706 "uuid": "f87cb922-0be7-481f-bdb3-1ad3e45bdb9b",
00:18:40.706 "is_configured": true,
00:18:40.706 "data_offset": 0,
00:18:40.706 "data_size": 65536
00:18:40.706 },
00:18:40.706 {
00:18:40.706 "name": "BaseBdev3",
00:18:40.706 "uuid": "b7ff02fc-bca1-457a-922b-2481162261a2",
00:18:40.706 "is_configured": true,
00:18:40.706 "data_offset": 0,
00:18:40.706 "data_size": 65536
00:18:40.706 },
00:18:40.706 {
00:18:40.706 "name": "BaseBdev4",
00:18:40.706 "uuid": "27acb9cd-7df8-4339-b123-456505acd327",
00:18:40.706 "is_configured": true,
00:18:40.706 "data_offset": 0,
00:18:40.706 "data_size": 65536
00:18:40.706 }
00:18:40.706 ]
00:18:40.706 }'
00:18:40.706 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:40.706 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:41.273 [2024-11-15 10:47:11.585260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:41.273 "name": "Existed_Raid",
00:18:41.273 "aliases": [
00:18:41.273 "2ebd010a-1459-4469-8e41-37407cb5f70e"
00:18:41.273 ],
00:18:41.273 "product_name": "Raid Volume",
00:18:41.273 "block_size": 512,
00:18:41.273 "num_blocks": 196608,
00:18:41.273 "uuid": "2ebd010a-1459-4469-8e41-37407cb5f70e",
00:18:41.273 "assigned_rate_limits": {
00:18:41.273 "rw_ios_per_sec": 0,
00:18:41.273 "rw_mbytes_per_sec": 0,
00:18:41.273 "r_mbytes_per_sec": 0,
00:18:41.273 "w_mbytes_per_sec": 0
00:18:41.273 },
00:18:41.273 "claimed": false,
00:18:41.273 "zoned": false,
00:18:41.273 "supported_io_types": {
00:18:41.273 "read": true,
00:18:41.273 "write": true,
00:18:41.273 "unmap": false,
00:18:41.273 "flush": false,
00:18:41.273 "reset": true,
00:18:41.273 "nvme_admin": false,
00:18:41.273 "nvme_io": false,
00:18:41.273 "nvme_io_md": false,
00:18:41.273 "write_zeroes": true,
00:18:41.273 "zcopy": false,
00:18:41.273 "get_zone_info": false,
00:18:41.273 "zone_management": false,
00:18:41.273 "zone_append": false,
00:18:41.273 "compare": false,
00:18:41.273 "compare_and_write": false,
00:18:41.273 "abort": false,
00:18:41.273 "seek_hole": false,
00:18:41.273 "seek_data": false,
00:18:41.273 "copy": false,
00:18:41.273 "nvme_iov_md": false
00:18:41.273 },
00:18:41.273 "driver_specific": {
00:18:41.273 "raid": {
00:18:41.273 "uuid": "2ebd010a-1459-4469-8e41-37407cb5f70e",
00:18:41.273 "strip_size_kb": 64,
00:18:41.273 "state": "online",
00:18:41.273 "raid_level": "raid5f",
00:18:41.273 "superblock": false,
00:18:41.273 "num_base_bdevs": 4,
00:18:41.273 "num_base_bdevs_discovered": 4,
00:18:41.273 "num_base_bdevs_operational": 4,
00:18:41.273 "base_bdevs_list": [
00:18:41.273 {
00:18:41.273 "name": "BaseBdev1",
00:18:41.273 "uuid": "966b3a80-5660-4f70-9822-8ea371bc6752",
00:18:41.273 "is_configured": true,
00:18:41.273 "data_offset": 0,
00:18:41.273 "data_size": 65536
00:18:41.273 },
00:18:41.273 {
00:18:41.273 "name": "BaseBdev2",
00:18:41.273 "uuid": "f87cb922-0be7-481f-bdb3-1ad3e45bdb9b",
00:18:41.273 "is_configured": true,
00:18:41.273 "data_offset": 0,
00:18:41.273 "data_size": 65536
00:18:41.273 },
00:18:41.273 {
00:18:41.273 "name": "BaseBdev3",
00:18:41.273 "uuid": "b7ff02fc-bca1-457a-922b-2481162261a2",
00:18:41.273 "is_configured": true,
00:18:41.273 "data_offset": 0,
00:18:41.273 "data_size": 65536
00:18:41.273 },
00:18:41.273 {
00:18:41.273 "name": "BaseBdev4",
00:18:41.273 "uuid": "27acb9cd-7df8-4339-b123-456505acd327",
00:18:41.273 "is_configured": true,
00:18:41.273 "data_offset": 0,
00:18:41.273 "data_size": 65536
00:18:41.273 }
00:18:41.273 ]
00:18:41.273 }
00:18:41.273 }
00:18:41.273 }'
00:18:41.273 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:18:41.274 BaseBdev2
00:18:41.274 BaseBdev3
00:18:41.274 BaseBdev4'
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:41.274 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:41.533 10:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:41.533 [2024-11-15 10:47:11.937174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:41.533 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:41.533 "name": "Existed_Raid",
00:18:41.533 "uuid": "2ebd010a-1459-4469-8e41-37407cb5f70e",
00:18:41.533 "strip_size_kb": 64,
00:18:41.533 "state": "online",
00:18:41.533 "raid_level": "raid5f",
00:18:41.533 "superblock": false,
00:18:41.533 "num_base_bdevs": 4,
00:18:41.533 "num_base_bdevs_discovered": 3,
00:18:41.533 "num_base_bdevs_operational": 3,
00:18:41.533 "base_bdevs_list": [
00:18:41.534 {
00:18:41.534 "name": null,
00:18:41.534 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:41.534 "is_configured": false,
00:18:41.534 "data_offset": 0,
00:18:41.534 "data_size": 65536
00:18:41.534 },
00:18:41.534 {
00:18:41.534 "name": "BaseBdev2",
00:18:41.534 "uuid": "f87cb922-0be7-481f-bdb3-1ad3e45bdb9b",
00:18:41.534 "is_configured": true,
00:18:41.534 "data_offset": 0,
00:18:41.534 "data_size": 65536
00:18:41.534 },
00:18:41.534 {
00:18:41.534 "name": "BaseBdev3",
00:18:41.534 "uuid": "b7ff02fc-bca1-457a-922b-2481162261a2",
00:18:41.534 "is_configured": true,
00:18:41.534 "data_offset": 0,
00:18:41.534 "data_size": 65536
00:18:41.534 },
00:18:41.534 {
00:18:41.534 "name": "BaseBdev4",
00:18:41.534 "uuid": "27acb9cd-7df8-4339-b123-456505acd327",
00:18:41.534 "is_configured": true,
00:18:41.534 "data_offset": 0,
00:18:41.534 "data_size": 65536
00:18:41.534 }
00:18:41.534 ]
00:18:41.534 }'
00:18:41.534 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:41.534 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.100 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.100 [2024-11-15 10:47:12.610095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
[2024-11-15 10:47:12.610224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-15 10:47:12.692791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.358 [2024-11-15 10:47:12.744834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.358 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.358 [2024-11-15 10:47:12.891282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
[2024-11-15 10:47:12.891413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:18:42.619 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.619 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:18:42.619 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:18:42.619 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:18:42.619 10:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:42.619 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.619 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.619 10:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.619 BaseBdev2
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.619 [
00:18:42.619 {
00:18:42.619 "name": "BaseBdev2",
00:18:42.619 "aliases": [
00:18:42.619 "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56"
00:18:42.619 ],
00:18:42.619 "product_name": "Malloc disk",
00:18:42.619 "block_size": 512,
00:18:42.619 "num_blocks": 65536,
00:18:42.619 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56",
00:18:42.619 "assigned_rate_limits": {
00:18:42.619 "rw_ios_per_sec": 0,
00:18:42.619 "rw_mbytes_per_sec": 0,
00:18:42.619 "r_mbytes_per_sec": 0,
00:18:42.619 "w_mbytes_per_sec": 0
00:18:42.619 },
00:18:42.619 "claimed": false,
00:18:42.619 "zoned": false,
00:18:42.619 "supported_io_types": {
00:18:42.619 "read": true,
00:18:42.619 "write": true,
00:18:42.619 "unmap": true,
00:18:42.619 "flush": true,
00:18:42.619 "reset": true,
00:18:42.619 "nvme_admin": false,
00:18:42.619 "nvme_io": false,
00:18:42.619 "nvme_io_md": false,
00:18:42.619 "write_zeroes": true,
00:18:42.619 "zcopy": true,
00:18:42.619 "get_zone_info": false,
00:18:42.619 "zone_management": false,
00:18:42.619 "zone_append": false,
00:18:42.619 "compare": false,
00:18:42.619 "compare_and_write": false,
00:18:42.619 "abort": true,
00:18:42.619 "seek_hole": false,
00:18:42.619 "seek_data": false,
00:18:42.619 "copy": true,
00:18:42.619 "nvme_iov_md": false
00:18:42.619 },
00:18:42.619 "memory_domains": [
00:18:42.619 {
00:18:42.619 "dma_device_id": "system",
00:18:42.619 "dma_device_type": 1
00:18:42.619 },
00:18:42.619 {
00:18:42.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:42.619 "dma_device_type": 2
00:18:42.619 }
00:18:42.619 ],
00:18:42.619 "driver_specific": {}
00:18:42.619 }
00:18:42.619 ]
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.619 BaseBdev3
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.619 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.619 [
00:18:42.619 {
00:18:42.619 "name": "BaseBdev3",
00:18:42.619 "aliases": [
00:18:42.619 "aacb37d3-444c-4ffa-9c43-57cda1c021e0"
00:18:42.619 ],
00:18:42.619 "product_name": "Malloc disk",
00:18:42.619 "block_size": 512,
00:18:42.619 "num_blocks": 65536,
00:18:42.619 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0",
00:18:42.619 "assigned_rate_limits": {
00:18:42.619 "rw_ios_per_sec": 0,
00:18:42.619 "rw_mbytes_per_sec": 0,
00:18:42.619 "r_mbytes_per_sec": 0,
00:18:42.619 "w_mbytes_per_sec": 0
00:18:42.619 },
00:18:42.619 "claimed": false,
00:18:42.619 "zoned": false,
00:18:42.619 "supported_io_types": {
00:18:42.619 "read": true,
00:18:42.619 "write": true,
00:18:42.619 "unmap": true,
00:18:42.619 "flush": true,
00:18:42.619 "reset": true,
00:18:42.619 "nvme_admin": false,
00:18:42.619 "nvme_io": false,
00:18:42.619 "nvme_io_md": false,
00:18:42.619 "write_zeroes": true,
00:18:42.619 "zcopy": true,
00:18:42.619 "get_zone_info": false,
00:18:42.619 "zone_management": false,
00:18:42.619 "zone_append": false,
00:18:42.619 "compare": false,
00:18:42.619 "compare_and_write": false,
00:18:42.619 "abort": true,
00:18:42.619 "seek_hole": false,
00:18:42.619 "seek_data": false,
00:18:42.619 "copy": true,
00:18:42.619 "nvme_iov_md": false
00:18:42.619 },
00:18:42.619 "memory_domains": [
00:18:42.619 {
00:18:42.619 "dma_device_id": "system",
00:18:42.619 "dma_device_type": 1
00:18:42.619 },
00:18:42.619 {
00:18:42.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:42.619 "dma_device_type": 2
00:18:42.619 }
00:18:42.882 ],
00:18:42.882 "driver_specific": {}
00:18:42.882 }
00:18:42.882 ]
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.882 BaseBdev4
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.882 [
00:18:42.882 {
00:18:42.882 "name": "BaseBdev4",
00:18:42.882 "aliases": [
00:18:42.882 "fca532e8-31fb-41e4-9ee1-5442b01be47c"
00:18:42.882 ],
00:18:42.882 "product_name": "Malloc disk",
00:18:42.882 "block_size": 512,
00:18:42.882 "num_blocks": 65536,
00:18:42.882 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c",
00:18:42.882 "assigned_rate_limits": {
00:18:42.882 "rw_ios_per_sec": 0,
00:18:42.882 "rw_mbytes_per_sec": 0,
00:18:42.882 "r_mbytes_per_sec": 0,
00:18:42.882 "w_mbytes_per_sec": 0
00:18:42.882 },
00:18:42.882 "claimed": false,
00:18:42.882 "zoned": false,
00:18:42.882 "supported_io_types": {
00:18:42.882 "read": true,
00:18:42.882 "write": true,
00:18:42.882 "unmap": true,
00:18:42.882 "flush": true,
00:18:42.882 "reset": true,
00:18:42.882 "nvme_admin": false,
00:18:42.882 "nvme_io": false,
00:18:42.882 "nvme_io_md": false,
00:18:42.882 "write_zeroes": true,
00:18:42.882 "zcopy": true,
00:18:42.882 "get_zone_info": false,
00:18:42.882 "zone_management": false,
00:18:42.882 "zone_append": false,
00:18:42.882 "compare": false,
00:18:42.882 "compare_and_write": false,
00:18:42.882 "abort": true,
00:18:42.882 "seek_hole": false,
00:18:42.882 "seek_data": false,
00:18:42.882 "copy": true,
00:18:42.882 "nvme_iov_md": false
00:18:42.882 },
00:18:42.882 "memory_domains": [
00:18:42.882 {
00:18:42.882 "dma_device_id": "system",
00:18:42.882 "dma_device_type": 1
00:18:42.882 },
00:18:42.882 {
00:18:42.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:42.882 "dma_device_type": 2
00:18:42.882 }
00:18:42.882 ],
00:18:42.882 "driver_specific": {}
00:18:42.882 }
00:18:42.882 ]
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.882 [2024-11-15 10:47:13.254393] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-11-15 10:47:13.254605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-11-15 10:47:13.254661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-11-15 10:47:13.257003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-11-15 10:47:13.257078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:42.882 "name": "Existed_Raid",
00:18:42.882 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:42.882 "strip_size_kb": 64,
00:18:42.882 "state": "configuring",
00:18:42.882 "raid_level": "raid5f",
00:18:42.882 "superblock": false,
00:18:42.882 "num_base_bdevs": 4,
00:18:42.882 "num_base_bdevs_discovered": 3,
00:18:42.882 "num_base_bdevs_operational": 4,
00:18:42.882 "base_bdevs_list": [
00:18:42.882 {
00:18:42.882 "name": "BaseBdev1",
00:18:42.882 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:42.882 "is_configured": false,
00:18:42.882 "data_offset": 0,
00:18:42.882 "data_size": 0
00:18:42.882 },
00:18:42.882 {
00:18:42.882 "name": "BaseBdev2",
00:18:42.882 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56",
00:18:42.882 "is_configured": true,
00:18:42.882 "data_offset": 0,
00:18:42.882 "data_size": 65536
00:18:42.882 },
00:18:42.882 {
00:18:42.882 "name": "BaseBdev3",
00:18:42.882 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0",
00:18:42.882 "is_configured": true,
00:18:42.882 "data_offset": 0,
00:18:42.882 "data_size": 65536
00:18:42.882 },
00:18:42.882 {
00:18:42.882 "name": "BaseBdev4",
00:18:42.882 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c",
00:18:42.882 "is_configured": true,
00:18:42.882 "data_offset": 0,
00:18:42.882 "data_size": 65536
00:18:42.882 }
00:18:42.882 ]
00:18:42.882 }'
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:42.882 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:43.450 [2024-11-15 10:47:13.762500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.450 "name": "Existed_Raid", 00:18:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.450 "strip_size_kb": 64, 00:18:43.450 "state": "configuring", 00:18:43.450 "raid_level": "raid5f", 00:18:43.450 "superblock": false, 00:18:43.450 "num_base_bdevs": 4, 
00:18:43.450 "num_base_bdevs_discovered": 2, 00:18:43.450 "num_base_bdevs_operational": 4, 00:18:43.450 "base_bdevs_list": [ 00:18:43.450 { 00:18:43.450 "name": "BaseBdev1", 00:18:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.450 "is_configured": false, 00:18:43.450 "data_offset": 0, 00:18:43.450 "data_size": 0 00:18:43.450 }, 00:18:43.450 { 00:18:43.450 "name": null, 00:18:43.450 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56", 00:18:43.450 "is_configured": false, 00:18:43.450 "data_offset": 0, 00:18:43.450 "data_size": 65536 00:18:43.450 }, 00:18:43.450 { 00:18:43.450 "name": "BaseBdev3", 00:18:43.450 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0", 00:18:43.450 "is_configured": true, 00:18:43.450 "data_offset": 0, 00:18:43.450 "data_size": 65536 00:18:43.450 }, 00:18:43.450 { 00:18:43.450 "name": "BaseBdev4", 00:18:43.450 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c", 00:18:43.450 "is_configured": true, 00:18:43.450 "data_offset": 0, 00:18:43.450 "data_size": 65536 00:18:43.450 } 00:18:43.450 ] 00:18:43.450 }' 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.450 10:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:44.017 10:47:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.017 [2024-11-15 10:47:14.352266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.017 BaseBdev1 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:44.017 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.017 10:47:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.017 [ 00:18:44.017 { 00:18:44.017 "name": "BaseBdev1", 00:18:44.017 "aliases": [ 00:18:44.017 "db28653e-33a1-4e87-9c10-2ee14ca539b9" 00:18:44.017 ], 00:18:44.017 "product_name": "Malloc disk", 00:18:44.017 "block_size": 512, 00:18:44.017 "num_blocks": 65536, 00:18:44.017 "uuid": "db28653e-33a1-4e87-9c10-2ee14ca539b9", 00:18:44.017 "assigned_rate_limits": { 00:18:44.017 "rw_ios_per_sec": 0, 00:18:44.017 "rw_mbytes_per_sec": 0, 00:18:44.017 "r_mbytes_per_sec": 0, 00:18:44.017 "w_mbytes_per_sec": 0 00:18:44.017 }, 00:18:44.017 "claimed": true, 00:18:44.017 "claim_type": "exclusive_write", 00:18:44.017 "zoned": false, 00:18:44.017 "supported_io_types": { 00:18:44.017 "read": true, 00:18:44.017 "write": true, 00:18:44.017 "unmap": true, 00:18:44.017 "flush": true, 00:18:44.017 "reset": true, 00:18:44.017 "nvme_admin": false, 00:18:44.018 "nvme_io": false, 00:18:44.018 "nvme_io_md": false, 00:18:44.018 "write_zeroes": true, 00:18:44.018 "zcopy": true, 00:18:44.018 "get_zone_info": false, 00:18:44.018 "zone_management": false, 00:18:44.018 "zone_append": false, 00:18:44.018 "compare": false, 00:18:44.018 "compare_and_write": false, 00:18:44.018 "abort": true, 00:18:44.018 "seek_hole": false, 00:18:44.018 "seek_data": false, 00:18:44.018 "copy": true, 00:18:44.018 "nvme_iov_md": false 00:18:44.018 }, 00:18:44.018 "memory_domains": [ 00:18:44.018 { 00:18:44.018 "dma_device_id": "system", 00:18:44.018 "dma_device_type": 1 00:18:44.018 }, 00:18:44.018 { 00:18:44.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.018 "dma_device_type": 2 00:18:44.018 } 00:18:44.018 ], 00:18:44.018 "driver_specific": {} 00:18:44.018 } 00:18:44.018 ] 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:44.018 10:47:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.018 "name": "Existed_Raid", 00:18:44.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.018 "strip_size_kb": 64, 00:18:44.018 "state": 
"configuring", 00:18:44.018 "raid_level": "raid5f", 00:18:44.018 "superblock": false, 00:18:44.018 "num_base_bdevs": 4, 00:18:44.018 "num_base_bdevs_discovered": 3, 00:18:44.018 "num_base_bdevs_operational": 4, 00:18:44.018 "base_bdevs_list": [ 00:18:44.018 { 00:18:44.018 "name": "BaseBdev1", 00:18:44.018 "uuid": "db28653e-33a1-4e87-9c10-2ee14ca539b9", 00:18:44.018 "is_configured": true, 00:18:44.018 "data_offset": 0, 00:18:44.018 "data_size": 65536 00:18:44.018 }, 00:18:44.018 { 00:18:44.018 "name": null, 00:18:44.018 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56", 00:18:44.018 "is_configured": false, 00:18:44.018 "data_offset": 0, 00:18:44.018 "data_size": 65536 00:18:44.018 }, 00:18:44.018 { 00:18:44.018 "name": "BaseBdev3", 00:18:44.018 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0", 00:18:44.018 "is_configured": true, 00:18:44.018 "data_offset": 0, 00:18:44.018 "data_size": 65536 00:18:44.018 }, 00:18:44.018 { 00:18:44.018 "name": "BaseBdev4", 00:18:44.018 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c", 00:18:44.018 "is_configured": true, 00:18:44.018 "data_offset": 0, 00:18:44.018 "data_size": 65536 00:18:44.018 } 00:18:44.018 ] 00:18:44.018 }' 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.018 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.585 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:44.585 10:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.585 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.585 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.585 10:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.585 10:47:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.585 [2024-11-15 10:47:15.024565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.585 10:47:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.585 "name": "Existed_Raid", 00:18:44.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.585 "strip_size_kb": 64, 00:18:44.585 "state": "configuring", 00:18:44.585 "raid_level": "raid5f", 00:18:44.585 "superblock": false, 00:18:44.585 "num_base_bdevs": 4, 00:18:44.585 "num_base_bdevs_discovered": 2, 00:18:44.585 "num_base_bdevs_operational": 4, 00:18:44.585 "base_bdevs_list": [ 00:18:44.585 { 00:18:44.585 "name": "BaseBdev1", 00:18:44.585 "uuid": "db28653e-33a1-4e87-9c10-2ee14ca539b9", 00:18:44.585 "is_configured": true, 00:18:44.585 "data_offset": 0, 00:18:44.585 "data_size": 65536 00:18:44.585 }, 00:18:44.585 { 00:18:44.585 "name": null, 00:18:44.585 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56", 00:18:44.585 "is_configured": false, 00:18:44.585 "data_offset": 0, 00:18:44.585 "data_size": 65536 00:18:44.585 }, 00:18:44.585 { 00:18:44.585 "name": null, 00:18:44.585 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0", 00:18:44.585 "is_configured": false, 00:18:44.585 "data_offset": 0, 00:18:44.585 "data_size": 65536 00:18:44.585 }, 00:18:44.585 { 00:18:44.585 "name": "BaseBdev4", 00:18:44.585 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c", 00:18:44.585 "is_configured": true, 00:18:44.585 "data_offset": 0, 00:18:44.585 "data_size": 65536 00:18:44.585 } 00:18:44.585 ] 00:18:44.585 }' 00:18:44.585 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.586 10:47:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.153 [2024-11-15 10:47:15.580741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.153 
10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.153 "name": "Existed_Raid", 00:18:45.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.153 "strip_size_kb": 64, 00:18:45.153 "state": "configuring", 00:18:45.153 "raid_level": "raid5f", 00:18:45.153 "superblock": false, 00:18:45.153 "num_base_bdevs": 4, 00:18:45.153 "num_base_bdevs_discovered": 3, 00:18:45.153 "num_base_bdevs_operational": 4, 00:18:45.153 "base_bdevs_list": [ 00:18:45.153 { 00:18:45.153 "name": "BaseBdev1", 00:18:45.153 "uuid": "db28653e-33a1-4e87-9c10-2ee14ca539b9", 00:18:45.153 "is_configured": true, 00:18:45.153 "data_offset": 0, 00:18:45.153 "data_size": 65536 00:18:45.153 }, 00:18:45.153 { 00:18:45.153 "name": null, 00:18:45.153 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56", 00:18:45.153 "is_configured": 
false, 00:18:45.153 "data_offset": 0, 00:18:45.153 "data_size": 65536 00:18:45.153 }, 00:18:45.153 { 00:18:45.153 "name": "BaseBdev3", 00:18:45.153 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0", 00:18:45.153 "is_configured": true, 00:18:45.153 "data_offset": 0, 00:18:45.153 "data_size": 65536 00:18:45.153 }, 00:18:45.153 { 00:18:45.153 "name": "BaseBdev4", 00:18:45.153 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c", 00:18:45.153 "is_configured": true, 00:18:45.153 "data_offset": 0, 00:18:45.153 "data_size": 65536 00:18:45.153 } 00:18:45.153 ] 00:18:45.153 }' 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.153 10:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.721 [2024-11-15 10:47:16.136892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.721 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.721 "name": "Existed_Raid", 00:18:45.721 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:45.721 "strip_size_kb": 64, 00:18:45.721 "state": "configuring", 00:18:45.721 "raid_level": "raid5f", 00:18:45.721 "superblock": false, 00:18:45.721 "num_base_bdevs": 4, 00:18:45.721 "num_base_bdevs_discovered": 2, 00:18:45.721 "num_base_bdevs_operational": 4, 00:18:45.721 "base_bdevs_list": [ 00:18:45.721 { 00:18:45.721 "name": null, 00:18:45.721 "uuid": "db28653e-33a1-4e87-9c10-2ee14ca539b9", 00:18:45.721 "is_configured": false, 00:18:45.721 "data_offset": 0, 00:18:45.721 "data_size": 65536 00:18:45.721 }, 00:18:45.721 { 00:18:45.721 "name": null, 00:18:45.721 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56", 00:18:45.721 "is_configured": false, 00:18:45.722 "data_offset": 0, 00:18:45.722 "data_size": 65536 00:18:45.722 }, 00:18:45.722 { 00:18:45.722 "name": "BaseBdev3", 00:18:45.722 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0", 00:18:45.722 "is_configured": true, 00:18:45.722 "data_offset": 0, 00:18:45.722 "data_size": 65536 00:18:45.722 }, 00:18:45.722 { 00:18:45.722 "name": "BaseBdev4", 00:18:45.722 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c", 00:18:45.722 "is_configured": true, 00:18:45.722 "data_offset": 0, 00:18:45.722 "data_size": 65536 00:18:45.722 } 00:18:45.722 ] 00:18:45.722 }' 00:18:45.722 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.722 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.289 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.289 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:46.289 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.289 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.290 [2024-11-15 10:47:16.777011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.290 "name": "Existed_Raid", 00:18:46.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.290 "strip_size_kb": 64, 00:18:46.290 "state": "configuring", 00:18:46.290 "raid_level": "raid5f", 00:18:46.290 "superblock": false, 00:18:46.290 "num_base_bdevs": 4, 00:18:46.290 "num_base_bdevs_discovered": 3, 00:18:46.290 "num_base_bdevs_operational": 4, 00:18:46.290 "base_bdevs_list": [ 00:18:46.290 { 00:18:46.290 "name": null, 00:18:46.290 "uuid": "db28653e-33a1-4e87-9c10-2ee14ca539b9", 00:18:46.290 "is_configured": false, 00:18:46.290 "data_offset": 0, 00:18:46.290 "data_size": 65536 00:18:46.290 }, 00:18:46.290 { 00:18:46.290 "name": "BaseBdev2", 00:18:46.290 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56", 00:18:46.290 "is_configured": true, 00:18:46.290 "data_offset": 0, 00:18:46.290 "data_size": 65536 00:18:46.290 }, 00:18:46.290 { 00:18:46.290 "name": "BaseBdev3", 00:18:46.290 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0", 00:18:46.290 "is_configured": true, 00:18:46.290 "data_offset": 0, 00:18:46.290 "data_size": 65536 00:18:46.290 }, 00:18:46.290 { 00:18:46.290 "name": "BaseBdev4", 00:18:46.290 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c", 00:18:46.290 "is_configured": true, 00:18:46.290 "data_offset": 0, 00:18:46.290 "data_size": 65536 00:18:46.290 } 00:18:46.290 ] 00:18:46.290 }' 00:18:46.290 10:47:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.290 10:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u db28653e-33a1-4e87-9c10-2ee14ca539b9 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.933 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.933 [2024-11-15 10:47:17.482809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:46.933 [2024-11-15 
10:47:17.482883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:46.933 [2024-11-15 10:47:17.482895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:46.933 [2024-11-15 10:47:17.483224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:46.933 [2024-11-15 10:47:17.490328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:46.933 [2024-11-15 10:47:17.490399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:46.933 [2024-11-15 10:47:17.490788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.192 NewBaseBdev 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.192 [ 00:18:47.192 { 00:18:47.192 "name": "NewBaseBdev", 00:18:47.192 "aliases": [ 00:18:47.192 "db28653e-33a1-4e87-9c10-2ee14ca539b9" 00:18:47.192 ], 00:18:47.192 "product_name": "Malloc disk", 00:18:47.192 "block_size": 512, 00:18:47.192 "num_blocks": 65536, 00:18:47.192 "uuid": "db28653e-33a1-4e87-9c10-2ee14ca539b9", 00:18:47.192 "assigned_rate_limits": { 00:18:47.192 "rw_ios_per_sec": 0, 00:18:47.192 "rw_mbytes_per_sec": 0, 00:18:47.192 "r_mbytes_per_sec": 0, 00:18:47.192 "w_mbytes_per_sec": 0 00:18:47.192 }, 00:18:47.192 "claimed": true, 00:18:47.192 "claim_type": "exclusive_write", 00:18:47.192 "zoned": false, 00:18:47.192 "supported_io_types": { 00:18:47.192 "read": true, 00:18:47.192 "write": true, 00:18:47.192 "unmap": true, 00:18:47.192 "flush": true, 00:18:47.192 "reset": true, 00:18:47.192 "nvme_admin": false, 00:18:47.192 "nvme_io": false, 00:18:47.192 "nvme_io_md": false, 00:18:47.192 "write_zeroes": true, 00:18:47.192 "zcopy": true, 00:18:47.192 "get_zone_info": false, 00:18:47.192 "zone_management": false, 00:18:47.192 "zone_append": false, 00:18:47.192 "compare": false, 00:18:47.192 "compare_and_write": false, 00:18:47.192 "abort": true, 00:18:47.192 "seek_hole": false, 00:18:47.192 "seek_data": false, 00:18:47.192 "copy": true, 00:18:47.192 "nvme_iov_md": false 00:18:47.192 }, 00:18:47.192 "memory_domains": [ 00:18:47.192 { 00:18:47.192 "dma_device_id": "system", 00:18:47.192 "dma_device_type": 1 00:18:47.192 }, 00:18:47.192 { 00:18:47.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.192 "dma_device_type": 2 00:18:47.192 } 
00:18:47.192 ], 00:18:47.192 "driver_specific": {} 00:18:47.192 } 00:18:47.192 ] 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.192 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.192 "name": "Existed_Raid", 00:18:47.192 "uuid": "4f73d342-7027-4896-bdcc-e5d791c09439", 00:18:47.192 "strip_size_kb": 64, 00:18:47.192 "state": "online", 00:18:47.192 "raid_level": "raid5f", 00:18:47.192 "superblock": false, 00:18:47.192 "num_base_bdevs": 4, 00:18:47.192 "num_base_bdevs_discovered": 4, 00:18:47.192 "num_base_bdevs_operational": 4, 00:18:47.192 "base_bdevs_list": [ 00:18:47.192 { 00:18:47.192 "name": "NewBaseBdev", 00:18:47.192 "uuid": "db28653e-33a1-4e87-9c10-2ee14ca539b9", 00:18:47.192 "is_configured": true, 00:18:47.192 "data_offset": 0, 00:18:47.192 "data_size": 65536 00:18:47.192 }, 00:18:47.192 { 00:18:47.192 "name": "BaseBdev2", 00:18:47.192 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56", 00:18:47.192 "is_configured": true, 00:18:47.192 "data_offset": 0, 00:18:47.192 "data_size": 65536 00:18:47.192 }, 00:18:47.192 { 00:18:47.192 "name": "BaseBdev3", 00:18:47.192 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0", 00:18:47.192 "is_configured": true, 00:18:47.192 "data_offset": 0, 00:18:47.192 "data_size": 65536 00:18:47.192 }, 00:18:47.192 { 00:18:47.192 "name": "BaseBdev4", 00:18:47.192 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c", 00:18:47.192 "is_configured": true, 00:18:47.192 "data_offset": 0, 00:18:47.192 "data_size": 65536 00:18:47.192 } 00:18:47.193 ] 00:18:47.193 }' 00:18:47.193 10:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.193 10:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.758 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:47.758 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:47.758 10:47:18 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:47.758 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:47.758 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.759 [2024-11-15 10:47:18.046316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.759 "name": "Existed_Raid", 00:18:47.759 "aliases": [ 00:18:47.759 "4f73d342-7027-4896-bdcc-e5d791c09439" 00:18:47.759 ], 00:18:47.759 "product_name": "Raid Volume", 00:18:47.759 "block_size": 512, 00:18:47.759 "num_blocks": 196608, 00:18:47.759 "uuid": "4f73d342-7027-4896-bdcc-e5d791c09439", 00:18:47.759 "assigned_rate_limits": { 00:18:47.759 "rw_ios_per_sec": 0, 00:18:47.759 "rw_mbytes_per_sec": 0, 00:18:47.759 "r_mbytes_per_sec": 0, 00:18:47.759 "w_mbytes_per_sec": 0 00:18:47.759 }, 00:18:47.759 "claimed": false, 00:18:47.759 "zoned": false, 00:18:47.759 "supported_io_types": { 00:18:47.759 "read": true, 00:18:47.759 "write": true, 00:18:47.759 "unmap": false, 00:18:47.759 "flush": false, 00:18:47.759 "reset": true, 00:18:47.759 "nvme_admin": false, 00:18:47.759 "nvme_io": false, 00:18:47.759 "nvme_io_md": 
false, 00:18:47.759 "write_zeroes": true, 00:18:47.759 "zcopy": false, 00:18:47.759 "get_zone_info": false, 00:18:47.759 "zone_management": false, 00:18:47.759 "zone_append": false, 00:18:47.759 "compare": false, 00:18:47.759 "compare_and_write": false, 00:18:47.759 "abort": false, 00:18:47.759 "seek_hole": false, 00:18:47.759 "seek_data": false, 00:18:47.759 "copy": false, 00:18:47.759 "nvme_iov_md": false 00:18:47.759 }, 00:18:47.759 "driver_specific": { 00:18:47.759 "raid": { 00:18:47.759 "uuid": "4f73d342-7027-4896-bdcc-e5d791c09439", 00:18:47.759 "strip_size_kb": 64, 00:18:47.759 "state": "online", 00:18:47.759 "raid_level": "raid5f", 00:18:47.759 "superblock": false, 00:18:47.759 "num_base_bdevs": 4, 00:18:47.759 "num_base_bdevs_discovered": 4, 00:18:47.759 "num_base_bdevs_operational": 4, 00:18:47.759 "base_bdevs_list": [ 00:18:47.759 { 00:18:47.759 "name": "NewBaseBdev", 00:18:47.759 "uuid": "db28653e-33a1-4e87-9c10-2ee14ca539b9", 00:18:47.759 "is_configured": true, 00:18:47.759 "data_offset": 0, 00:18:47.759 "data_size": 65536 00:18:47.759 }, 00:18:47.759 { 00:18:47.759 "name": "BaseBdev2", 00:18:47.759 "uuid": "6d5b4c6d-805d-4b41-b071-0d6ca61e9b56", 00:18:47.759 "is_configured": true, 00:18:47.759 "data_offset": 0, 00:18:47.759 "data_size": 65536 00:18:47.759 }, 00:18:47.759 { 00:18:47.759 "name": "BaseBdev3", 00:18:47.759 "uuid": "aacb37d3-444c-4ffa-9c43-57cda1c021e0", 00:18:47.759 "is_configured": true, 00:18:47.759 "data_offset": 0, 00:18:47.759 "data_size": 65536 00:18:47.759 }, 00:18:47.759 { 00:18:47.759 "name": "BaseBdev4", 00:18:47.759 "uuid": "fca532e8-31fb-41e4-9ee1-5442b01be47c", 00:18:47.759 "is_configured": true, 00:18:47.759 "data_offset": 0, 00:18:47.759 "data_size": 65536 00:18:47.759 } 00:18:47.759 ] 00:18:47.759 } 00:18:47.759 } 00:18:47.759 }' 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:47.759 10:47:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:47.759 BaseBdev2 00:18:47.759 BaseBdev3 00:18:47.759 BaseBdev4' 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.759 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:48.018 10:47:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.018 [2024-11-15 10:47:18.474217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:48.018 [2024-11-15 10:47:18.474534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.018 [2024-11-15 10:47:18.474861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.018 [2024-11-15 10:47:18.475655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.018 [2024-11-15 10:47:18.475850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83311 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83311 ']' 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83311 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83311 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:48.018 killing process with pid 83311 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83311' 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83311 00:18:48.018 [2024-11-15 10:47:18.512396] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.018 10:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83311 00:18:48.585 [2024-11-15 10:47:18.846514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.519 10:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:49.519 00:18:49.519 real 0m13.015s 00:18:49.519 user 0m21.861s 00:18:49.519 sys 0m1.661s 00:18:49.519 10:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:49.519 10:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.519 ************************************ 00:18:49.519 END TEST raid5f_state_function_test 00:18:49.519 ************************************ 00:18:49.519 10:47:19 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:49.520 10:47:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:49.520 10:47:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:49.520 10:47:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.520 ************************************ 00:18:49.520 START TEST 
raid5f_state_function_test_sb 00:18:49.520 ************************************ 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:49.520 
10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:49.520 Process raid pid: 83994 00:18:49.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83994 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83994' 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83994 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83994 ']' 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:49.520 10:47:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.520 [2024-11-15 10:47:19.997400] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:18:49.520 [2024-11-15 10:47:19.997792] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.778 [2024-11-15 10:47:20.218416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.778 [2024-11-15 10:47:20.328534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.036 [2024-11-15 10:47:20.516300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.036 [2024-11-15 10:47:20.516645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.603 [2024-11-15 10:47:21.013572] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.603 [2024-11-15 10:47:21.013820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:50.603 [2024-11-15 10:47:21.013849] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.603 [2024-11-15 10:47:21.013868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.603 [2024-11-15 10:47:21.013878] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:50.603 [2024-11-15 10:47:21.013892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.603 [2024-11-15 10:47:21.013901] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:50.603 [2024-11-15 10:47:21.013915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.603 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.603 "name": "Existed_Raid", 00:18:50.603 "uuid": "d5061c1f-d813-43d1-9455-e97108efbb31", 00:18:50.603 "strip_size_kb": 64, 00:18:50.603 "state": "configuring", 00:18:50.603 "raid_level": "raid5f", 00:18:50.603 "superblock": true, 00:18:50.603 "num_base_bdevs": 4, 00:18:50.603 "num_base_bdevs_discovered": 0, 00:18:50.603 "num_base_bdevs_operational": 4, 00:18:50.603 "base_bdevs_list": [ 00:18:50.603 { 00:18:50.603 "name": "BaseBdev1", 00:18:50.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.604 "is_configured": false, 00:18:50.604 "data_offset": 0, 00:18:50.604 "data_size": 0 00:18:50.604 }, 00:18:50.604 { 00:18:50.604 "name": "BaseBdev2", 00:18:50.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.604 "is_configured": false, 00:18:50.604 "data_offset": 0, 00:18:50.604 "data_size": 0 00:18:50.604 }, 00:18:50.604 { 00:18:50.604 "name": "BaseBdev3", 00:18:50.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.604 "is_configured": false, 00:18:50.604 "data_offset": 0, 00:18:50.604 "data_size": 0 00:18:50.604 }, 00:18:50.604 { 00:18:50.604 "name": "BaseBdev4", 00:18:50.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.604 "is_configured": false, 00:18:50.604 "data_offset": 0, 00:18:50.604 "data_size": 0 00:18:50.604 } 00:18:50.604 ] 00:18:50.604 }' 00:18:50.604 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.604 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.171 [2024-11-15 10:47:21.517649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:51.171 [2024-11-15 10:47:21.517841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.171 [2024-11-15 10:47:21.529670] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:51.171 [2024-11-15 10:47:21.529727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:51.171 [2024-11-15 10:47:21.529743] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:51.171 [2024-11-15 10:47:21.529759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:51.171 [2024-11-15 10:47:21.529769] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:51.171 [2024-11-15 10:47:21.529783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:51.171 [2024-11-15 10:47:21.529793] 
bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:51.171 [2024-11-15 10:47:21.529806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.171 [2024-11-15 10:47:21.570244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.171 BaseBdev1 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.171 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.171 [ 00:18:51.171 { 00:18:51.171 "name": "BaseBdev1", 00:18:51.171 "aliases": [ 00:18:51.171 "9adcf42a-6ee4-4040-8b4f-1ba206f3bec4" 00:18:51.171 ], 00:18:51.171 "product_name": "Malloc disk", 00:18:51.171 "block_size": 512, 00:18:51.171 "num_blocks": 65536, 00:18:51.171 "uuid": "9adcf42a-6ee4-4040-8b4f-1ba206f3bec4", 00:18:51.171 "assigned_rate_limits": { 00:18:51.171 "rw_ios_per_sec": 0, 00:18:51.171 "rw_mbytes_per_sec": 0, 00:18:51.171 "r_mbytes_per_sec": 0, 00:18:51.171 "w_mbytes_per_sec": 0 00:18:51.171 }, 00:18:51.171 "claimed": true, 00:18:51.171 "claim_type": "exclusive_write", 00:18:51.171 "zoned": false, 00:18:51.171 "supported_io_types": { 00:18:51.171 "read": true, 00:18:51.171 "write": true, 00:18:51.171 "unmap": true, 00:18:51.171 "flush": true, 00:18:51.171 "reset": true, 00:18:51.171 "nvme_admin": false, 00:18:51.171 "nvme_io": false, 00:18:51.171 "nvme_io_md": false, 00:18:51.171 "write_zeroes": true, 00:18:51.171 "zcopy": true, 00:18:51.171 "get_zone_info": false, 00:18:51.171 "zone_management": false, 00:18:51.171 "zone_append": false, 00:18:51.171 "compare": false, 00:18:51.171 "compare_and_write": false, 00:18:51.171 "abort": true, 00:18:51.171 "seek_hole": false, 00:18:51.171 "seek_data": false, 00:18:51.171 "copy": true, 00:18:51.171 "nvme_iov_md": false 00:18:51.171 }, 00:18:51.171 "memory_domains": [ 00:18:51.171 { 00:18:51.171 "dma_device_id": "system", 00:18:51.171 "dma_device_type": 1 00:18:51.171 }, 00:18:51.171 { 00:18:51.171 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:51.171 "dma_device_type": 2 00:18:51.171 } 00:18:51.171 ], 00:18:51.172 "driver_specific": {} 00:18:51.172 } 00:18:51.172 ] 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.172 10:47:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.172 "name": "Existed_Raid", 00:18:51.172 "uuid": "97a45f12-9f85-4789-97ea-0cac30e57b0a", 00:18:51.172 "strip_size_kb": 64, 00:18:51.172 "state": "configuring", 00:18:51.172 "raid_level": "raid5f", 00:18:51.172 "superblock": true, 00:18:51.172 "num_base_bdevs": 4, 00:18:51.172 "num_base_bdevs_discovered": 1, 00:18:51.172 "num_base_bdevs_operational": 4, 00:18:51.172 "base_bdevs_list": [ 00:18:51.172 { 00:18:51.172 "name": "BaseBdev1", 00:18:51.172 "uuid": "9adcf42a-6ee4-4040-8b4f-1ba206f3bec4", 00:18:51.172 "is_configured": true, 00:18:51.172 "data_offset": 2048, 00:18:51.172 "data_size": 63488 00:18:51.172 }, 00:18:51.172 { 00:18:51.172 "name": "BaseBdev2", 00:18:51.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.172 "is_configured": false, 00:18:51.172 "data_offset": 0, 00:18:51.172 "data_size": 0 00:18:51.172 }, 00:18:51.172 { 00:18:51.172 "name": "BaseBdev3", 00:18:51.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.172 "is_configured": false, 00:18:51.172 "data_offset": 0, 00:18:51.172 "data_size": 0 00:18:51.172 }, 00:18:51.172 { 00:18:51.172 "name": "BaseBdev4", 00:18:51.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.172 "is_configured": false, 00:18:51.172 "data_offset": 0, 00:18:51.172 "data_size": 0 00:18:51.172 } 00:18:51.172 ] 00:18:51.172 }' 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.172 10:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:51.739 10:47:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.739 [2024-11-15 10:47:22.138448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:51.739 [2024-11-15 10:47:22.138657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.739 [2024-11-15 10:47:22.150529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.739 [2024-11-15 10:47:22.152780] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:51.739 [2024-11-15 10:47:22.152835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:51.739 [2024-11-15 10:47:22.152852] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:51.739 [2024-11-15 10:47:22.152870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:51.739 [2024-11-15 10:47:22.152880] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:51.739 [2024-11-15 10:47:22.152894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.739 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.740 10:47:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.740 "name": "Existed_Raid", 00:18:51.740 "uuid": "fd33027f-7407-454b-ae79-c4790dc8b9c4", 00:18:51.740 "strip_size_kb": 64, 00:18:51.740 "state": "configuring", 00:18:51.740 "raid_level": "raid5f", 00:18:51.740 "superblock": true, 00:18:51.740 "num_base_bdevs": 4, 00:18:51.740 "num_base_bdevs_discovered": 1, 00:18:51.740 "num_base_bdevs_operational": 4, 00:18:51.740 "base_bdevs_list": [ 00:18:51.740 { 00:18:51.740 "name": "BaseBdev1", 00:18:51.740 "uuid": "9adcf42a-6ee4-4040-8b4f-1ba206f3bec4", 00:18:51.740 "is_configured": true, 00:18:51.740 "data_offset": 2048, 00:18:51.740 "data_size": 63488 00:18:51.740 }, 00:18:51.740 { 00:18:51.740 "name": "BaseBdev2", 00:18:51.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.740 "is_configured": false, 00:18:51.740 "data_offset": 0, 00:18:51.740 "data_size": 0 00:18:51.740 }, 00:18:51.740 { 00:18:51.740 "name": "BaseBdev3", 00:18:51.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.740 "is_configured": false, 00:18:51.740 "data_offset": 0, 00:18:51.740 "data_size": 0 00:18:51.740 }, 00:18:51.740 { 00:18:51.740 "name": "BaseBdev4", 00:18:51.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.740 "is_configured": false, 00:18:51.740 "data_offset": 0, 00:18:51.740 "data_size": 0 00:18:51.740 } 00:18:51.740 ] 00:18:51.740 }' 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.740 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.308 [2024-11-15 10:47:22.677039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.308 BaseBdev2 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.308 [ 00:18:52.308 { 00:18:52.308 "name": "BaseBdev2", 00:18:52.308 "aliases": [ 00:18:52.308 
"1595e163-7f61-429a-9521-785f70b7ddcd" 00:18:52.308 ], 00:18:52.308 "product_name": "Malloc disk", 00:18:52.308 "block_size": 512, 00:18:52.308 "num_blocks": 65536, 00:18:52.308 "uuid": "1595e163-7f61-429a-9521-785f70b7ddcd", 00:18:52.308 "assigned_rate_limits": { 00:18:52.308 "rw_ios_per_sec": 0, 00:18:52.308 "rw_mbytes_per_sec": 0, 00:18:52.308 "r_mbytes_per_sec": 0, 00:18:52.308 "w_mbytes_per_sec": 0 00:18:52.308 }, 00:18:52.308 "claimed": true, 00:18:52.308 "claim_type": "exclusive_write", 00:18:52.308 "zoned": false, 00:18:52.308 "supported_io_types": { 00:18:52.308 "read": true, 00:18:52.308 "write": true, 00:18:52.308 "unmap": true, 00:18:52.308 "flush": true, 00:18:52.308 "reset": true, 00:18:52.308 "nvme_admin": false, 00:18:52.308 "nvme_io": false, 00:18:52.308 "nvme_io_md": false, 00:18:52.308 "write_zeroes": true, 00:18:52.308 "zcopy": true, 00:18:52.308 "get_zone_info": false, 00:18:52.308 "zone_management": false, 00:18:52.308 "zone_append": false, 00:18:52.308 "compare": false, 00:18:52.308 "compare_and_write": false, 00:18:52.308 "abort": true, 00:18:52.308 "seek_hole": false, 00:18:52.308 "seek_data": false, 00:18:52.308 "copy": true, 00:18:52.308 "nvme_iov_md": false 00:18:52.308 }, 00:18:52.308 "memory_domains": [ 00:18:52.308 { 00:18:52.308 "dma_device_id": "system", 00:18:52.308 "dma_device_type": 1 00:18:52.308 }, 00:18:52.308 { 00:18:52.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.308 "dma_device_type": 2 00:18:52.308 } 00:18:52.308 ], 00:18:52.308 "driver_specific": {} 00:18:52.308 } 00:18:52.308 ] 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.308 "name": "Existed_Raid", 00:18:52.308 "uuid": 
"fd33027f-7407-454b-ae79-c4790dc8b9c4", 00:18:52.308 "strip_size_kb": 64, 00:18:52.308 "state": "configuring", 00:18:52.308 "raid_level": "raid5f", 00:18:52.308 "superblock": true, 00:18:52.308 "num_base_bdevs": 4, 00:18:52.308 "num_base_bdevs_discovered": 2, 00:18:52.308 "num_base_bdevs_operational": 4, 00:18:52.308 "base_bdevs_list": [ 00:18:52.308 { 00:18:52.308 "name": "BaseBdev1", 00:18:52.308 "uuid": "9adcf42a-6ee4-4040-8b4f-1ba206f3bec4", 00:18:52.308 "is_configured": true, 00:18:52.308 "data_offset": 2048, 00:18:52.308 "data_size": 63488 00:18:52.308 }, 00:18:52.308 { 00:18:52.308 "name": "BaseBdev2", 00:18:52.308 "uuid": "1595e163-7f61-429a-9521-785f70b7ddcd", 00:18:52.308 "is_configured": true, 00:18:52.308 "data_offset": 2048, 00:18:52.308 "data_size": 63488 00:18:52.308 }, 00:18:52.308 { 00:18:52.308 "name": "BaseBdev3", 00:18:52.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.308 "is_configured": false, 00:18:52.308 "data_offset": 0, 00:18:52.308 "data_size": 0 00:18:52.308 }, 00:18:52.308 { 00:18:52.308 "name": "BaseBdev4", 00:18:52.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.308 "is_configured": false, 00:18:52.308 "data_offset": 0, 00:18:52.308 "data_size": 0 00:18:52.308 } 00:18:52.308 ] 00:18:52.308 }' 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.308 10:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.875 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.875 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.875 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.875 [2024-11-15 10:47:23.286549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.875 BaseBdev3 
00:18:52.875 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.876 [ 00:18:52.876 { 00:18:52.876 "name": "BaseBdev3", 00:18:52.876 "aliases": [ 00:18:52.876 "fb6ed4d9-e121-42ff-b112-01e9eac817ed" 00:18:52.876 ], 00:18:52.876 "product_name": "Malloc disk", 00:18:52.876 "block_size": 512, 00:18:52.876 "num_blocks": 65536, 00:18:52.876 "uuid": "fb6ed4d9-e121-42ff-b112-01e9eac817ed", 00:18:52.876 
"assigned_rate_limits": { 00:18:52.876 "rw_ios_per_sec": 0, 00:18:52.876 "rw_mbytes_per_sec": 0, 00:18:52.876 "r_mbytes_per_sec": 0, 00:18:52.876 "w_mbytes_per_sec": 0 00:18:52.876 }, 00:18:52.876 "claimed": true, 00:18:52.876 "claim_type": "exclusive_write", 00:18:52.876 "zoned": false, 00:18:52.876 "supported_io_types": { 00:18:52.876 "read": true, 00:18:52.876 "write": true, 00:18:52.876 "unmap": true, 00:18:52.876 "flush": true, 00:18:52.876 "reset": true, 00:18:52.876 "nvme_admin": false, 00:18:52.876 "nvme_io": false, 00:18:52.876 "nvme_io_md": false, 00:18:52.876 "write_zeroes": true, 00:18:52.876 "zcopy": true, 00:18:52.876 "get_zone_info": false, 00:18:52.876 "zone_management": false, 00:18:52.876 "zone_append": false, 00:18:52.876 "compare": false, 00:18:52.876 "compare_and_write": false, 00:18:52.876 "abort": true, 00:18:52.876 "seek_hole": false, 00:18:52.876 "seek_data": false, 00:18:52.876 "copy": true, 00:18:52.876 "nvme_iov_md": false 00:18:52.876 }, 00:18:52.876 "memory_domains": [ 00:18:52.876 { 00:18:52.876 "dma_device_id": "system", 00:18:52.876 "dma_device_type": 1 00:18:52.876 }, 00:18:52.876 { 00:18:52.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.876 "dma_device_type": 2 00:18:52.876 } 00:18:52.876 ], 00:18:52.876 "driver_specific": {} 00:18:52.876 } 00:18:52.876 ] 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.876 "name": "Existed_Raid", 00:18:52.876 "uuid": "fd33027f-7407-454b-ae79-c4790dc8b9c4", 00:18:52.876 "strip_size_kb": 64, 00:18:52.876 "state": "configuring", 00:18:52.876 "raid_level": "raid5f", 00:18:52.876 "superblock": true, 00:18:52.876 "num_base_bdevs": 4, 00:18:52.876 "num_base_bdevs_discovered": 3, 
00:18:52.876 "num_base_bdevs_operational": 4, 00:18:52.876 "base_bdevs_list": [ 00:18:52.876 { 00:18:52.876 "name": "BaseBdev1", 00:18:52.876 "uuid": "9adcf42a-6ee4-4040-8b4f-1ba206f3bec4", 00:18:52.876 "is_configured": true, 00:18:52.876 "data_offset": 2048, 00:18:52.876 "data_size": 63488 00:18:52.876 }, 00:18:52.876 { 00:18:52.876 "name": "BaseBdev2", 00:18:52.876 "uuid": "1595e163-7f61-429a-9521-785f70b7ddcd", 00:18:52.876 "is_configured": true, 00:18:52.876 "data_offset": 2048, 00:18:52.876 "data_size": 63488 00:18:52.876 }, 00:18:52.876 { 00:18:52.876 "name": "BaseBdev3", 00:18:52.876 "uuid": "fb6ed4d9-e121-42ff-b112-01e9eac817ed", 00:18:52.876 "is_configured": true, 00:18:52.876 "data_offset": 2048, 00:18:52.876 "data_size": 63488 00:18:52.876 }, 00:18:52.876 { 00:18:52.876 "name": "BaseBdev4", 00:18:52.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.876 "is_configured": false, 00:18:52.876 "data_offset": 0, 00:18:52.876 "data_size": 0 00:18:52.876 } 00:18:52.876 ] 00:18:52.876 }' 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.876 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.444 [2024-11-15 10:47:23.892878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:53.444 [2024-11-15 10:47:23.893234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:53.444 [2024-11-15 10:47:23.893256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:53.444 BaseBdev4 
00:18:53.444 [2024-11-15 10:47:23.893605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.444 [2024-11-15 10:47:23.900535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:53.444 [2024-11-15 10:47:23.900586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:53.444 [2024-11-15 10:47:23.900901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:53.444 10:47:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.444 [ 00:18:53.444 { 00:18:53.444 "name": "BaseBdev4", 00:18:53.444 "aliases": [ 00:18:53.444 "1ee9f56b-9620-4060-a3b1-cfa47a26340d" 00:18:53.444 ], 00:18:53.444 "product_name": "Malloc disk", 00:18:53.444 "block_size": 512, 00:18:53.444 "num_blocks": 65536, 00:18:53.444 "uuid": "1ee9f56b-9620-4060-a3b1-cfa47a26340d", 00:18:53.444 "assigned_rate_limits": { 00:18:53.444 "rw_ios_per_sec": 0, 00:18:53.444 "rw_mbytes_per_sec": 0, 00:18:53.444 "r_mbytes_per_sec": 0, 00:18:53.444 "w_mbytes_per_sec": 0 00:18:53.444 }, 00:18:53.444 "claimed": true, 00:18:53.444 "claim_type": "exclusive_write", 00:18:53.444 "zoned": false, 00:18:53.444 "supported_io_types": { 00:18:53.444 "read": true, 00:18:53.444 "write": true, 00:18:53.444 "unmap": true, 00:18:53.444 "flush": true, 00:18:53.444 "reset": true, 00:18:53.444 "nvme_admin": false, 00:18:53.444 "nvme_io": false, 00:18:53.444 "nvme_io_md": false, 00:18:53.444 "write_zeroes": true, 00:18:53.444 "zcopy": true, 00:18:53.444 "get_zone_info": false, 00:18:53.444 "zone_management": false, 00:18:53.444 "zone_append": false, 00:18:53.444 "compare": false, 00:18:53.444 "compare_and_write": false, 00:18:53.444 "abort": true, 00:18:53.444 "seek_hole": false, 00:18:53.444 "seek_data": false, 00:18:53.444 "copy": true, 00:18:53.444 "nvme_iov_md": false 00:18:53.444 }, 00:18:53.444 "memory_domains": [ 00:18:53.444 { 00:18:53.444 "dma_device_id": "system", 00:18:53.444 "dma_device_type": 1 00:18:53.444 }, 00:18:53.444 { 00:18:53.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.444 "dma_device_type": 2 00:18:53.444 } 00:18:53.444 ], 00:18:53.444 "driver_specific": {} 00:18:53.444 } 00:18:53.444 ] 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.444 10:47:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.444 "name": "Existed_Raid", 00:18:53.444 "uuid": "fd33027f-7407-454b-ae79-c4790dc8b9c4", 00:18:53.444 "strip_size_kb": 64, 00:18:53.444 "state": "online", 00:18:53.444 "raid_level": "raid5f", 00:18:53.444 "superblock": true, 00:18:53.444 "num_base_bdevs": 4, 00:18:53.444 "num_base_bdevs_discovered": 4, 00:18:53.444 "num_base_bdevs_operational": 4, 00:18:53.444 "base_bdevs_list": [ 00:18:53.444 { 00:18:53.444 "name": "BaseBdev1", 00:18:53.444 "uuid": "9adcf42a-6ee4-4040-8b4f-1ba206f3bec4", 00:18:53.444 "is_configured": true, 00:18:53.444 "data_offset": 2048, 00:18:53.444 "data_size": 63488 00:18:53.444 }, 00:18:53.444 { 00:18:53.444 "name": "BaseBdev2", 00:18:53.444 "uuid": "1595e163-7f61-429a-9521-785f70b7ddcd", 00:18:53.444 "is_configured": true, 00:18:53.444 "data_offset": 2048, 00:18:53.444 "data_size": 63488 00:18:53.444 }, 00:18:53.444 { 00:18:53.444 "name": "BaseBdev3", 00:18:53.444 "uuid": "fb6ed4d9-e121-42ff-b112-01e9eac817ed", 00:18:53.444 "is_configured": true, 00:18:53.444 "data_offset": 2048, 00:18:53.444 "data_size": 63488 00:18:53.444 }, 00:18:53.444 { 00:18:53.444 "name": "BaseBdev4", 00:18:53.444 "uuid": "1ee9f56b-9620-4060-a3b1-cfa47a26340d", 00:18:53.444 "is_configured": true, 00:18:53.444 "data_offset": 2048, 00:18:53.444 "data_size": 63488 00:18:53.444 } 00:18:53.444 ] 00:18:53.444 }' 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.444 10:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.012 [2024-11-15 10:47:24.456157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:54.012 "name": "Existed_Raid", 00:18:54.012 "aliases": [ 00:18:54.012 "fd33027f-7407-454b-ae79-c4790dc8b9c4" 00:18:54.012 ], 00:18:54.012 "product_name": "Raid Volume", 00:18:54.012 "block_size": 512, 00:18:54.012 "num_blocks": 190464, 00:18:54.012 "uuid": "fd33027f-7407-454b-ae79-c4790dc8b9c4", 00:18:54.012 "assigned_rate_limits": { 00:18:54.012 "rw_ios_per_sec": 0, 00:18:54.012 "rw_mbytes_per_sec": 0, 00:18:54.012 "r_mbytes_per_sec": 0, 00:18:54.012 "w_mbytes_per_sec": 0 00:18:54.012 }, 00:18:54.012 "claimed": false, 00:18:54.012 "zoned": false, 00:18:54.012 "supported_io_types": { 00:18:54.012 "read": true, 00:18:54.012 "write": true, 00:18:54.012 "unmap": false, 00:18:54.012 "flush": false, 
00:18:54.012 "reset": true, 00:18:54.012 "nvme_admin": false, 00:18:54.012 "nvme_io": false, 00:18:54.012 "nvme_io_md": false, 00:18:54.012 "write_zeroes": true, 00:18:54.012 "zcopy": false, 00:18:54.012 "get_zone_info": false, 00:18:54.012 "zone_management": false, 00:18:54.012 "zone_append": false, 00:18:54.012 "compare": false, 00:18:54.012 "compare_and_write": false, 00:18:54.012 "abort": false, 00:18:54.012 "seek_hole": false, 00:18:54.012 "seek_data": false, 00:18:54.012 "copy": false, 00:18:54.012 "nvme_iov_md": false 00:18:54.012 }, 00:18:54.012 "driver_specific": { 00:18:54.012 "raid": { 00:18:54.012 "uuid": "fd33027f-7407-454b-ae79-c4790dc8b9c4", 00:18:54.012 "strip_size_kb": 64, 00:18:54.012 "state": "online", 00:18:54.012 "raid_level": "raid5f", 00:18:54.012 "superblock": true, 00:18:54.012 "num_base_bdevs": 4, 00:18:54.012 "num_base_bdevs_discovered": 4, 00:18:54.012 "num_base_bdevs_operational": 4, 00:18:54.012 "base_bdevs_list": [ 00:18:54.012 { 00:18:54.012 "name": "BaseBdev1", 00:18:54.012 "uuid": "9adcf42a-6ee4-4040-8b4f-1ba206f3bec4", 00:18:54.012 "is_configured": true, 00:18:54.012 "data_offset": 2048, 00:18:54.012 "data_size": 63488 00:18:54.012 }, 00:18:54.012 { 00:18:54.012 "name": "BaseBdev2", 00:18:54.012 "uuid": "1595e163-7f61-429a-9521-785f70b7ddcd", 00:18:54.012 "is_configured": true, 00:18:54.012 "data_offset": 2048, 00:18:54.012 "data_size": 63488 00:18:54.012 }, 00:18:54.012 { 00:18:54.012 "name": "BaseBdev3", 00:18:54.012 "uuid": "fb6ed4d9-e121-42ff-b112-01e9eac817ed", 00:18:54.012 "is_configured": true, 00:18:54.012 "data_offset": 2048, 00:18:54.012 "data_size": 63488 00:18:54.012 }, 00:18:54.012 { 00:18:54.012 "name": "BaseBdev4", 00:18:54.012 "uuid": "1ee9f56b-9620-4060-a3b1-cfa47a26340d", 00:18:54.012 "is_configured": true, 00:18:54.012 "data_offset": 2048, 00:18:54.012 "data_size": 63488 00:18:54.012 } 00:18:54.012 ] 00:18:54.012 } 00:18:54.012 } 00:18:54.012 }' 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:54.012 BaseBdev2 00:18:54.012 BaseBdev3 00:18:54.012 BaseBdev4' 00:18:54.012 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.271 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.271 [2024-11-15 10:47:24.824144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.530 "name": "Existed_Raid", 00:18:54.530 "uuid": "fd33027f-7407-454b-ae79-c4790dc8b9c4", 00:18:54.530 "strip_size_kb": 64, 00:18:54.530 "state": "online", 00:18:54.530 "raid_level": "raid5f", 00:18:54.530 "superblock": true, 00:18:54.530 "num_base_bdevs": 4, 00:18:54.530 "num_base_bdevs_discovered": 3, 00:18:54.530 "num_base_bdevs_operational": 3, 00:18:54.530 "base_bdevs_list": [ 00:18:54.530 { 00:18:54.530 "name": null, 00:18:54.530 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:54.530 "is_configured": false, 00:18:54.530 "data_offset": 0, 00:18:54.530 "data_size": 63488 00:18:54.530 }, 00:18:54.530 { 00:18:54.530 "name": "BaseBdev2", 00:18:54.530 "uuid": "1595e163-7f61-429a-9521-785f70b7ddcd", 00:18:54.530 "is_configured": true, 00:18:54.530 "data_offset": 2048, 00:18:54.530 "data_size": 63488 00:18:54.530 }, 00:18:54.530 { 00:18:54.530 "name": "BaseBdev3", 00:18:54.530 "uuid": "fb6ed4d9-e121-42ff-b112-01e9eac817ed", 00:18:54.530 "is_configured": true, 00:18:54.530 "data_offset": 2048, 00:18:54.530 "data_size": 63488 00:18:54.530 }, 00:18:54.530 { 00:18:54.530 "name": "BaseBdev4", 00:18:54.530 "uuid": "1ee9f56b-9620-4060-a3b1-cfa47a26340d", 00:18:54.530 "is_configured": true, 00:18:54.530 "data_offset": 2048, 00:18:54.530 "data_size": 63488 00:18:54.530 } 00:18:54.530 ] 00:18:54.530 }' 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.530 10:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.097 [2024-11-15 10:47:25.494691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:55.097 [2024-11-15 10:47:25.494899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.097 [2024-11-15 10:47:25.578131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.097 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:55.098 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.098 
10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:55.098 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.098 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.098 [2024-11-15 10:47:25.642218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.356 [2024-11-15 10:47:25.783415] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:55.356 [2024-11-15 10:47:25.783614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.356 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.616 BaseBdev2 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.616 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.616 [ 00:18:55.616 { 00:18:55.616 "name": "BaseBdev2", 00:18:55.616 "aliases": [ 00:18:55.616 "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6" 00:18:55.616 ], 00:18:55.616 "product_name": "Malloc disk", 00:18:55.616 "block_size": 512, 00:18:55.616 "num_blocks": 65536, 00:18:55.616 "uuid": 
"10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:18:55.616 "assigned_rate_limits": { 00:18:55.616 "rw_ios_per_sec": 0, 00:18:55.616 "rw_mbytes_per_sec": 0, 00:18:55.616 "r_mbytes_per_sec": 0, 00:18:55.616 "w_mbytes_per_sec": 0 00:18:55.616 }, 00:18:55.616 "claimed": false, 00:18:55.616 "zoned": false, 00:18:55.616 "supported_io_types": { 00:18:55.616 "read": true, 00:18:55.616 "write": true, 00:18:55.616 "unmap": true, 00:18:55.616 "flush": true, 00:18:55.616 "reset": true, 00:18:55.616 "nvme_admin": false, 00:18:55.616 "nvme_io": false, 00:18:55.616 "nvme_io_md": false, 00:18:55.616 "write_zeroes": true, 00:18:55.616 "zcopy": true, 00:18:55.616 "get_zone_info": false, 00:18:55.616 "zone_management": false, 00:18:55.616 "zone_append": false, 00:18:55.616 "compare": false, 00:18:55.616 "compare_and_write": false, 00:18:55.616 "abort": true, 00:18:55.616 "seek_hole": false, 00:18:55.616 "seek_data": false, 00:18:55.616 "copy": true, 00:18:55.616 "nvme_iov_md": false 00:18:55.616 }, 00:18:55.616 "memory_domains": [ 00:18:55.617 { 00:18:55.617 "dma_device_id": "system", 00:18:55.617 "dma_device_type": 1 00:18:55.617 }, 00:18:55.617 { 00:18:55.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.617 "dma_device_type": 2 00:18:55.617 } 00:18:55.617 ], 00:18:55.617 "driver_specific": {} 00:18:55.617 } 00:18:55.617 ] 00:18:55.617 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:55.617 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:55.617 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:55.617 10:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:55.617 10:47:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.617 10:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.617 BaseBdev3 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.617 [ 00:18:55.617 { 00:18:55.617 "name": "BaseBdev3", 00:18:55.617 "aliases": [ 00:18:55.617 "0609c887-c714-4281-8493-1dd432e9dfe0" 00:18:55.617 ], 00:18:55.617 
"product_name": "Malloc disk", 00:18:55.617 "block_size": 512, 00:18:55.617 "num_blocks": 65536, 00:18:55.617 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:18:55.617 "assigned_rate_limits": { 00:18:55.617 "rw_ios_per_sec": 0, 00:18:55.617 "rw_mbytes_per_sec": 0, 00:18:55.617 "r_mbytes_per_sec": 0, 00:18:55.617 "w_mbytes_per_sec": 0 00:18:55.617 }, 00:18:55.617 "claimed": false, 00:18:55.617 "zoned": false, 00:18:55.617 "supported_io_types": { 00:18:55.617 "read": true, 00:18:55.617 "write": true, 00:18:55.617 "unmap": true, 00:18:55.617 "flush": true, 00:18:55.617 "reset": true, 00:18:55.617 "nvme_admin": false, 00:18:55.617 "nvme_io": false, 00:18:55.617 "nvme_io_md": false, 00:18:55.617 "write_zeroes": true, 00:18:55.617 "zcopy": true, 00:18:55.617 "get_zone_info": false, 00:18:55.617 "zone_management": false, 00:18:55.617 "zone_append": false, 00:18:55.617 "compare": false, 00:18:55.617 "compare_and_write": false, 00:18:55.617 "abort": true, 00:18:55.617 "seek_hole": false, 00:18:55.617 "seek_data": false, 00:18:55.617 "copy": true, 00:18:55.617 "nvme_iov_md": false 00:18:55.617 }, 00:18:55.617 "memory_domains": [ 00:18:55.617 { 00:18:55.617 "dma_device_id": "system", 00:18:55.617 "dma_device_type": 1 00:18:55.617 }, 00:18:55.617 { 00:18:55.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.617 "dma_device_type": 2 00:18:55.617 } 00:18:55.617 ], 00:18:55.617 "driver_specific": {} 00:18:55.617 } 00:18:55.617 ] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.617 BaseBdev4 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.617 [ 00:18:55.617 { 00:18:55.617 "name": "BaseBdev4", 00:18:55.617 
"aliases": [ 00:18:55.617 "0c41ecbc-e54b-4261-97aa-24ea7d962bff" 00:18:55.617 ], 00:18:55.617 "product_name": "Malloc disk", 00:18:55.617 "block_size": 512, 00:18:55.617 "num_blocks": 65536, 00:18:55.617 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:18:55.617 "assigned_rate_limits": { 00:18:55.617 "rw_ios_per_sec": 0, 00:18:55.617 "rw_mbytes_per_sec": 0, 00:18:55.617 "r_mbytes_per_sec": 0, 00:18:55.617 "w_mbytes_per_sec": 0 00:18:55.617 }, 00:18:55.617 "claimed": false, 00:18:55.617 "zoned": false, 00:18:55.617 "supported_io_types": { 00:18:55.617 "read": true, 00:18:55.617 "write": true, 00:18:55.617 "unmap": true, 00:18:55.617 "flush": true, 00:18:55.617 "reset": true, 00:18:55.617 "nvme_admin": false, 00:18:55.617 "nvme_io": false, 00:18:55.617 "nvme_io_md": false, 00:18:55.617 "write_zeroes": true, 00:18:55.617 "zcopy": true, 00:18:55.617 "get_zone_info": false, 00:18:55.617 "zone_management": false, 00:18:55.617 "zone_append": false, 00:18:55.617 "compare": false, 00:18:55.617 "compare_and_write": false, 00:18:55.617 "abort": true, 00:18:55.617 "seek_hole": false, 00:18:55.617 "seek_data": false, 00:18:55.617 "copy": true, 00:18:55.617 "nvme_iov_md": false 00:18:55.617 }, 00:18:55.617 "memory_domains": [ 00:18:55.617 { 00:18:55.617 "dma_device_id": "system", 00:18:55.617 "dma_device_type": 1 00:18:55.617 }, 00:18:55.617 { 00:18:55.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.617 "dma_device_type": 2 00:18:55.617 } 00:18:55.617 ], 00:18:55.617 "driver_specific": {} 00:18:55.617 } 00:18:55.617 ] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:55.617 
10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.617 [2024-11-15 10:47:26.137224] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:55.617 [2024-11-15 10:47:26.137287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:55.617 [2024-11-15 10:47:26.137325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:55.617 [2024-11-15 10:47:26.139683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:55.617 [2024-11-15 10:47:26.139762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.617 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.618 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.618 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.618 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.618 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.618 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.618 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.876 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.876 "name": "Existed_Raid", 00:18:55.876 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:18:55.876 "strip_size_kb": 64, 00:18:55.876 "state": "configuring", 00:18:55.876 "raid_level": "raid5f", 00:18:55.876 "superblock": true, 00:18:55.876 "num_base_bdevs": 4, 00:18:55.876 "num_base_bdevs_discovered": 3, 00:18:55.876 "num_base_bdevs_operational": 4, 00:18:55.876 "base_bdevs_list": [ 00:18:55.876 { 00:18:55.876 "name": "BaseBdev1", 00:18:55.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.876 "is_configured": false, 00:18:55.876 "data_offset": 0, 00:18:55.876 "data_size": 0 00:18:55.876 }, 00:18:55.876 { 00:18:55.876 "name": "BaseBdev2", 00:18:55.876 "uuid": "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:18:55.876 "is_configured": true, 00:18:55.876 "data_offset": 2048, 00:18:55.876 "data_size": 63488 00:18:55.876 }, 00:18:55.876 { 00:18:55.876 "name": "BaseBdev3", 
00:18:55.876 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:18:55.876 "is_configured": true, 00:18:55.876 "data_offset": 2048, 00:18:55.876 "data_size": 63488 00:18:55.876 }, 00:18:55.876 { 00:18:55.876 "name": "BaseBdev4", 00:18:55.876 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:18:55.876 "is_configured": true, 00:18:55.876 "data_offset": 2048, 00:18:55.876 "data_size": 63488 00:18:55.876 } 00:18:55.876 ] 00:18:55.876 }' 00:18:55.876 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.876 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.134 [2024-11-15 10:47:26.677320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:56.134 
10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.134 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.394 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.394 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.394 "name": "Existed_Raid", 00:18:56.394 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:18:56.394 "strip_size_kb": 64, 00:18:56.394 "state": "configuring", 00:18:56.394 "raid_level": "raid5f", 00:18:56.394 "superblock": true, 00:18:56.394 "num_base_bdevs": 4, 00:18:56.394 "num_base_bdevs_discovered": 2, 00:18:56.394 "num_base_bdevs_operational": 4, 00:18:56.394 "base_bdevs_list": [ 00:18:56.394 { 00:18:56.394 "name": "BaseBdev1", 00:18:56.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.394 "is_configured": false, 00:18:56.394 "data_offset": 0, 00:18:56.394 "data_size": 0 00:18:56.394 }, 00:18:56.394 { 00:18:56.394 "name": null, 00:18:56.394 "uuid": "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:18:56.394 "is_configured": false, 00:18:56.394 "data_offset": 0, 00:18:56.394 "data_size": 63488 00:18:56.394 }, 00:18:56.394 { 
00:18:56.394 "name": "BaseBdev3", 00:18:56.394 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:18:56.394 "is_configured": true, 00:18:56.394 "data_offset": 2048, 00:18:56.394 "data_size": 63488 00:18:56.394 }, 00:18:56.394 { 00:18:56.394 "name": "BaseBdev4", 00:18:56.394 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:18:56.394 "is_configured": true, 00:18:56.394 "data_offset": 2048, 00:18:56.394 "data_size": 63488 00:18:56.394 } 00:18:56.394 ] 00:18:56.394 }' 00:18:56.394 10:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.394 10:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.961 [2024-11-15 10:47:27.308076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:56.961 BaseBdev1 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.961 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.962 [ 00:18:56.962 { 00:18:56.962 "name": "BaseBdev1", 00:18:56.962 "aliases": [ 00:18:56.962 "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0" 00:18:56.962 ], 00:18:56.962 "product_name": "Malloc disk", 00:18:56.962 "block_size": 512, 00:18:56.962 "num_blocks": 65536, 00:18:56.962 "uuid": "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0", 00:18:56.962 "assigned_rate_limits": { 00:18:56.962 "rw_ios_per_sec": 0, 00:18:56.962 "rw_mbytes_per_sec": 0, 00:18:56.962 
"r_mbytes_per_sec": 0, 00:18:56.962 "w_mbytes_per_sec": 0 00:18:56.962 }, 00:18:56.962 "claimed": true, 00:18:56.962 "claim_type": "exclusive_write", 00:18:56.962 "zoned": false, 00:18:56.962 "supported_io_types": { 00:18:56.962 "read": true, 00:18:56.962 "write": true, 00:18:56.962 "unmap": true, 00:18:56.962 "flush": true, 00:18:56.962 "reset": true, 00:18:56.962 "nvme_admin": false, 00:18:56.962 "nvme_io": false, 00:18:56.962 "nvme_io_md": false, 00:18:56.962 "write_zeroes": true, 00:18:56.962 "zcopy": true, 00:18:56.962 "get_zone_info": false, 00:18:56.962 "zone_management": false, 00:18:56.962 "zone_append": false, 00:18:56.962 "compare": false, 00:18:56.962 "compare_and_write": false, 00:18:56.962 "abort": true, 00:18:56.962 "seek_hole": false, 00:18:56.962 "seek_data": false, 00:18:56.962 "copy": true, 00:18:56.962 "nvme_iov_md": false 00:18:56.962 }, 00:18:56.962 "memory_domains": [ 00:18:56.962 { 00:18:56.962 "dma_device_id": "system", 00:18:56.962 "dma_device_type": 1 00:18:56.962 }, 00:18:56.962 { 00:18:56.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.962 "dma_device_type": 2 00:18:56.962 } 00:18:56.962 ], 00:18:56.962 "driver_specific": {} 00:18:56.962 } 00:18:56.962 ] 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.962 10:47:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.962 "name": "Existed_Raid", 00:18:56.962 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:18:56.962 "strip_size_kb": 64, 00:18:56.962 "state": "configuring", 00:18:56.962 "raid_level": "raid5f", 00:18:56.962 "superblock": true, 00:18:56.962 "num_base_bdevs": 4, 00:18:56.962 "num_base_bdevs_discovered": 3, 00:18:56.962 "num_base_bdevs_operational": 4, 00:18:56.962 "base_bdevs_list": [ 00:18:56.962 { 00:18:56.962 "name": "BaseBdev1", 00:18:56.962 "uuid": "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0", 00:18:56.962 "is_configured": true, 00:18:56.962 "data_offset": 2048, 00:18:56.962 "data_size": 63488 00:18:56.962 
}, 00:18:56.962 { 00:18:56.962 "name": null, 00:18:56.962 "uuid": "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:18:56.962 "is_configured": false, 00:18:56.962 "data_offset": 0, 00:18:56.962 "data_size": 63488 00:18:56.962 }, 00:18:56.962 { 00:18:56.962 "name": "BaseBdev3", 00:18:56.962 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:18:56.962 "is_configured": true, 00:18:56.962 "data_offset": 2048, 00:18:56.962 "data_size": 63488 00:18:56.962 }, 00:18:56.962 { 00:18:56.962 "name": "BaseBdev4", 00:18:56.962 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:18:56.962 "is_configured": true, 00:18:56.962 "data_offset": 2048, 00:18:56.962 "data_size": 63488 00:18:56.962 } 00:18:56.962 ] 00:18:56.962 }' 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.962 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.529 
[2024-11-15 10:47:27.952410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.529 10:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:57.529 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.529 "name": "Existed_Raid", 00:18:57.529 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:18:57.529 "strip_size_kb": 64, 00:18:57.529 "state": "configuring", 00:18:57.529 "raid_level": "raid5f", 00:18:57.529 "superblock": true, 00:18:57.529 "num_base_bdevs": 4, 00:18:57.529 "num_base_bdevs_discovered": 2, 00:18:57.529 "num_base_bdevs_operational": 4, 00:18:57.529 "base_bdevs_list": [ 00:18:57.530 { 00:18:57.530 "name": "BaseBdev1", 00:18:57.530 "uuid": "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0", 00:18:57.530 "is_configured": true, 00:18:57.530 "data_offset": 2048, 00:18:57.530 "data_size": 63488 00:18:57.530 }, 00:18:57.530 { 00:18:57.530 "name": null, 00:18:57.530 "uuid": "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:18:57.530 "is_configured": false, 00:18:57.530 "data_offset": 0, 00:18:57.530 "data_size": 63488 00:18:57.530 }, 00:18:57.530 { 00:18:57.530 "name": null, 00:18:57.530 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:18:57.530 "is_configured": false, 00:18:57.530 "data_offset": 0, 00:18:57.530 "data_size": 63488 00:18:57.530 }, 00:18:57.530 { 00:18:57.530 "name": "BaseBdev4", 00:18:57.530 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:18:57.530 "is_configured": true, 00:18:57.530 "data_offset": 2048, 00:18:57.530 "data_size": 63488 00:18:57.530 } 00:18:57.530 ] 00:18:57.530 }' 00:18:57.530 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.530 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.097 [2024-11-15 10:47:28.504569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.097 10:47:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.097 "name": "Existed_Raid", 00:18:58.097 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:18:58.097 "strip_size_kb": 64, 00:18:58.097 "state": "configuring", 00:18:58.097 "raid_level": "raid5f", 00:18:58.097 "superblock": true, 00:18:58.097 "num_base_bdevs": 4, 00:18:58.097 "num_base_bdevs_discovered": 3, 00:18:58.097 "num_base_bdevs_operational": 4, 00:18:58.097 "base_bdevs_list": [ 00:18:58.097 { 00:18:58.097 "name": "BaseBdev1", 00:18:58.097 "uuid": "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0", 00:18:58.097 "is_configured": true, 00:18:58.097 "data_offset": 2048, 00:18:58.097 "data_size": 63488 00:18:58.097 }, 00:18:58.097 { 00:18:58.097 "name": null, 00:18:58.097 "uuid": "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:18:58.097 "is_configured": false, 00:18:58.097 "data_offset": 0, 00:18:58.097 "data_size": 63488 00:18:58.097 }, 00:18:58.097 { 00:18:58.097 "name": "BaseBdev3", 00:18:58.097 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:18:58.097 "is_configured": true, 00:18:58.097 "data_offset": 2048, 00:18:58.097 "data_size": 63488 00:18:58.097 }, 00:18:58.097 { 
00:18:58.097 "name": "BaseBdev4", 00:18:58.097 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:18:58.097 "is_configured": true, 00:18:58.097 "data_offset": 2048, 00:18:58.097 "data_size": 63488 00:18:58.097 } 00:18:58.097 ] 00:18:58.097 }' 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.097 10:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.664 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.664 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.664 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:58.664 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.664 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.664 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:58.664 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:58.664 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.664 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.664 [2024-11-15 10:47:29.148760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.923 "name": "Existed_Raid", 00:18:58.923 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:18:58.923 "strip_size_kb": 64, 00:18:58.923 "state": "configuring", 00:18:58.923 "raid_level": "raid5f", 00:18:58.923 "superblock": true, 00:18:58.923 "num_base_bdevs": 4, 00:18:58.923 "num_base_bdevs_discovered": 2, 00:18:58.923 
"num_base_bdevs_operational": 4, 00:18:58.923 "base_bdevs_list": [ 00:18:58.923 { 00:18:58.923 "name": null, 00:18:58.923 "uuid": "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0", 00:18:58.923 "is_configured": false, 00:18:58.923 "data_offset": 0, 00:18:58.923 "data_size": 63488 00:18:58.923 }, 00:18:58.923 { 00:18:58.923 "name": null, 00:18:58.923 "uuid": "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:18:58.923 "is_configured": false, 00:18:58.923 "data_offset": 0, 00:18:58.923 "data_size": 63488 00:18:58.923 }, 00:18:58.923 { 00:18:58.923 "name": "BaseBdev3", 00:18:58.923 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:18:58.923 "is_configured": true, 00:18:58.923 "data_offset": 2048, 00:18:58.923 "data_size": 63488 00:18:58.923 }, 00:18:58.923 { 00:18:58.923 "name": "BaseBdev4", 00:18:58.923 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:18:58.923 "is_configured": true, 00:18:58.923 "data_offset": 2048, 00:18:58.923 "data_size": 63488 00:18:58.923 } 00:18:58.923 ] 00:18:58.923 }' 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.923 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.490 [2024-11-15 10:47:29.809652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.490 "name": "Existed_Raid", 00:18:59.490 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:18:59.490 "strip_size_kb": 64, 00:18:59.490 "state": "configuring", 00:18:59.490 "raid_level": "raid5f", 00:18:59.490 "superblock": true, 00:18:59.490 "num_base_bdevs": 4, 00:18:59.490 "num_base_bdevs_discovered": 3, 00:18:59.490 "num_base_bdevs_operational": 4, 00:18:59.490 "base_bdevs_list": [ 00:18:59.490 { 00:18:59.490 "name": null, 00:18:59.490 "uuid": "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0", 00:18:59.490 "is_configured": false, 00:18:59.490 "data_offset": 0, 00:18:59.490 "data_size": 63488 00:18:59.490 }, 00:18:59.490 { 00:18:59.490 "name": "BaseBdev2", 00:18:59.490 "uuid": "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:18:59.490 "is_configured": true, 00:18:59.490 "data_offset": 2048, 00:18:59.490 "data_size": 63488 00:18:59.490 }, 00:18:59.490 { 00:18:59.490 "name": "BaseBdev3", 00:18:59.490 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:18:59.490 "is_configured": true, 00:18:59.490 "data_offset": 2048, 00:18:59.490 "data_size": 63488 00:18:59.490 }, 00:18:59.490 { 00:18:59.490 "name": "BaseBdev4", 00:18:59.490 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:18:59.490 "is_configured": true, 00:18:59.490 "data_offset": 2048, 00:18:59.490 "data_size": 63488 00:18:59.490 } 00:18:59.490 ] 00:18:59.490 }' 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.490 10:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.058 [2024-11-15 10:47:30.447248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:00.058 [2024-11-15 10:47:30.447586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:00.058 [2024-11-15 
10:47:30.447607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:00.058 NewBaseBdev 00:19:00.058 [2024-11-15 10:47:30.447927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.058 [2024-11-15 10:47:30.454242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:00.058 [2024-11-15 10:47:30.454278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:00.058 [2024-11-15 10:47:30.454600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.058 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.059 [ 00:19:00.059 { 00:19:00.059 "name": "NewBaseBdev", 00:19:00.059 "aliases": [ 00:19:00.059 "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0" 00:19:00.059 ], 00:19:00.059 "product_name": "Malloc disk", 00:19:00.059 "block_size": 512, 00:19:00.059 "num_blocks": 65536, 00:19:00.059 "uuid": "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0", 00:19:00.059 "assigned_rate_limits": { 00:19:00.059 "rw_ios_per_sec": 0, 00:19:00.059 "rw_mbytes_per_sec": 0, 00:19:00.059 "r_mbytes_per_sec": 0, 00:19:00.059 "w_mbytes_per_sec": 0 00:19:00.059 }, 00:19:00.059 "claimed": true, 00:19:00.059 "claim_type": "exclusive_write", 00:19:00.059 "zoned": false, 00:19:00.059 "supported_io_types": { 00:19:00.059 "read": true, 00:19:00.059 "write": true, 00:19:00.059 "unmap": true, 00:19:00.059 "flush": true, 00:19:00.059 "reset": true, 00:19:00.059 "nvme_admin": false, 00:19:00.059 "nvme_io": false, 00:19:00.059 "nvme_io_md": false, 00:19:00.059 "write_zeroes": true, 00:19:00.059 "zcopy": true, 00:19:00.059 "get_zone_info": false, 00:19:00.059 "zone_management": false, 00:19:00.059 "zone_append": false, 00:19:00.059 "compare": false, 00:19:00.059 "compare_and_write": false, 00:19:00.059 "abort": true, 00:19:00.059 "seek_hole": false, 00:19:00.059 "seek_data": false, 00:19:00.059 "copy": true, 00:19:00.059 "nvme_iov_md": false 00:19:00.059 }, 00:19:00.059 "memory_domains": [ 00:19:00.059 { 00:19:00.059 "dma_device_id": "system", 00:19:00.059 "dma_device_type": 1 00:19:00.059 }, 00:19:00.059 { 00:19:00.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.059 "dma_device_type": 2 00:19:00.059 } 00:19:00.059 ], 00:19:00.059 "driver_specific": {} 00:19:00.059 } 00:19:00.059 ] 00:19:00.059 10:47:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.059 "name": "Existed_Raid", 00:19:00.059 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:19:00.059 "strip_size_kb": 64, 00:19:00.059 "state": "online", 00:19:00.059 "raid_level": "raid5f", 00:19:00.059 "superblock": true, 00:19:00.059 "num_base_bdevs": 4, 00:19:00.059 "num_base_bdevs_discovered": 4, 00:19:00.059 "num_base_bdevs_operational": 4, 00:19:00.059 "base_bdevs_list": [ 00:19:00.059 { 00:19:00.059 "name": "NewBaseBdev", 00:19:00.059 "uuid": "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0", 00:19:00.059 "is_configured": true, 00:19:00.059 "data_offset": 2048, 00:19:00.059 "data_size": 63488 00:19:00.059 }, 00:19:00.059 { 00:19:00.059 "name": "BaseBdev2", 00:19:00.059 "uuid": "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:19:00.059 "is_configured": true, 00:19:00.059 "data_offset": 2048, 00:19:00.059 "data_size": 63488 00:19:00.059 }, 00:19:00.059 { 00:19:00.059 "name": "BaseBdev3", 00:19:00.059 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:19:00.059 "is_configured": true, 00:19:00.059 "data_offset": 2048, 00:19:00.059 "data_size": 63488 00:19:00.059 }, 00:19:00.059 { 00:19:00.059 "name": "BaseBdev4", 00:19:00.059 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:19:00.059 "is_configured": true, 00:19:00.059 "data_offset": 2048, 00:19:00.059 "data_size": 63488 00:19:00.059 } 00:19:00.059 ] 00:19:00.059 }' 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.059 10:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.627 [2024-11-15 10:47:31.041777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:00.627 "name": "Existed_Raid", 00:19:00.627 "aliases": [ 00:19:00.627 "0d275646-553a-48b9-af08-f65917e78b33" 00:19:00.627 ], 00:19:00.627 "product_name": "Raid Volume", 00:19:00.627 "block_size": 512, 00:19:00.627 "num_blocks": 190464, 00:19:00.627 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:19:00.627 "assigned_rate_limits": { 00:19:00.627 "rw_ios_per_sec": 0, 00:19:00.627 "rw_mbytes_per_sec": 0, 00:19:00.627 "r_mbytes_per_sec": 0, 00:19:00.627 "w_mbytes_per_sec": 0 00:19:00.627 }, 00:19:00.627 "claimed": false, 00:19:00.627 "zoned": false, 00:19:00.627 "supported_io_types": { 00:19:00.627 "read": true, 00:19:00.627 "write": true, 00:19:00.627 "unmap": false, 00:19:00.627 "flush": false, 00:19:00.627 "reset": true, 00:19:00.627 "nvme_admin": false, 00:19:00.627 "nvme_io": false, 
00:19:00.627 "nvme_io_md": false, 00:19:00.627 "write_zeroes": true, 00:19:00.627 "zcopy": false, 00:19:00.627 "get_zone_info": false, 00:19:00.627 "zone_management": false, 00:19:00.627 "zone_append": false, 00:19:00.627 "compare": false, 00:19:00.627 "compare_and_write": false, 00:19:00.627 "abort": false, 00:19:00.627 "seek_hole": false, 00:19:00.627 "seek_data": false, 00:19:00.627 "copy": false, 00:19:00.627 "nvme_iov_md": false 00:19:00.627 }, 00:19:00.627 "driver_specific": { 00:19:00.627 "raid": { 00:19:00.627 "uuid": "0d275646-553a-48b9-af08-f65917e78b33", 00:19:00.627 "strip_size_kb": 64, 00:19:00.627 "state": "online", 00:19:00.627 "raid_level": "raid5f", 00:19:00.627 "superblock": true, 00:19:00.627 "num_base_bdevs": 4, 00:19:00.627 "num_base_bdevs_discovered": 4, 00:19:00.627 "num_base_bdevs_operational": 4, 00:19:00.627 "base_bdevs_list": [ 00:19:00.627 { 00:19:00.627 "name": "NewBaseBdev", 00:19:00.627 "uuid": "acf0f061-d8be-4a05-b3dd-ef8f86d2a7e0", 00:19:00.627 "is_configured": true, 00:19:00.627 "data_offset": 2048, 00:19:00.627 "data_size": 63488 00:19:00.627 }, 00:19:00.627 { 00:19:00.627 "name": "BaseBdev2", 00:19:00.627 "uuid": "10f1f99b-931c-4cbf-8b3f-ee0c3d23fcc6", 00:19:00.627 "is_configured": true, 00:19:00.627 "data_offset": 2048, 00:19:00.627 "data_size": 63488 00:19:00.627 }, 00:19:00.627 { 00:19:00.627 "name": "BaseBdev3", 00:19:00.627 "uuid": "0609c887-c714-4281-8493-1dd432e9dfe0", 00:19:00.627 "is_configured": true, 00:19:00.627 "data_offset": 2048, 00:19:00.627 "data_size": 63488 00:19:00.627 }, 00:19:00.627 { 00:19:00.627 "name": "BaseBdev4", 00:19:00.627 "uuid": "0c41ecbc-e54b-4261-97aa-24ea7d962bff", 00:19:00.627 "is_configured": true, 00:19:00.627 "data_offset": 2048, 00:19:00.627 "data_size": 63488 00:19:00.627 } 00:19:00.627 ] 00:19:00.627 } 00:19:00.627 } 00:19:00.627 }' 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:00.627 BaseBdev2 00:19:00.627 BaseBdev3 00:19:00.627 BaseBdev4' 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.627 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:00.628 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.628 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.628 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.886 [2024-11-15 10:47:31.393596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.886 [2024-11-15 10:47:31.393638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.886 [2024-11-15 10:47:31.393743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.886 [2024-11-15 10:47:31.394121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.886 [2024-11-15 10:47:31.394149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83994 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83994 ']' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83994 00:19:00.886 10:47:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83994 00:19:00.886 killing process with pid 83994 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83994' 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83994 00:19:00.886 [2024-11-15 10:47:31.425954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.886 10:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83994 00:19:01.475 [2024-11-15 10:47:31.758764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:02.410 10:47:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:02.410 00:19:02.410 real 0m12.855s 00:19:02.410 user 0m21.487s 00:19:02.410 sys 0m1.739s 00:19:02.410 10:47:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:02.410 10:47:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.410 ************************************ 00:19:02.410 END TEST raid5f_state_function_test_sb 00:19:02.410 ************************************ 00:19:02.410 10:47:32 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:02.410 10:47:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:02.410 
10:47:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:02.410 10:47:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:02.410 ************************************ 00:19:02.410 START TEST raid5f_superblock_test 00:19:02.410 ************************************ 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84676 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84676 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84676 ']' 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:02.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:02.410 10:47:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.410 [2024-11-15 10:47:32.885236] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:19:02.410 [2024-11-15 10:47:32.885422] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84676 ] 00:19:02.668 [2024-11-15 10:47:33.107421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.668 [2024-11-15 10:47:33.213065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.927 [2024-11-15 10:47:33.397244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.927 [2024-11-15 10:47:33.397325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.494 10:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.494 malloc1 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.494 [2024-11-15 10:47:34.023233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:03.494 [2024-11-15 10:47:34.023309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.494 [2024-11-15 10:47:34.023342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:03.494 [2024-11-15 10:47:34.023392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.494 [2024-11-15 10:47:34.026089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.494 [2024-11-15 10:47:34.026137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:03.494 pt1 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.494 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.753 malloc2 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.753 [2024-11-15 10:47:34.076451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.753 [2024-11-15 10:47:34.076526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.753 [2024-11-15 10:47:34.076565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:03.753 [2024-11-15 10:47:34.076580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.753 [2024-11-15 10:47:34.079264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.753 [2024-11-15 10:47:34.079311] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:03.753 pt2 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.753 malloc3 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.753 [2024-11-15 10:47:34.149231] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:03.753 [2024-11-15 10:47:34.149315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.753 [2024-11-15 10:47:34.149378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:03.753 [2024-11-15 10:47:34.149399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.753 [2024-11-15 10:47:34.152681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.753 [2024-11-15 10:47:34.152736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:03.753 pt3 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.753 10:47:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.753 malloc4 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:03.753 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.754 [2024-11-15 10:47:34.208501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:03.754 [2024-11-15 10:47:34.208591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.754 [2024-11-15 10:47:34.208630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:03.754 [2024-11-15 10:47:34.208647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.754 [2024-11-15 10:47:34.211873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.754 [2024-11-15 10:47:34.212075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:03.754 pt4 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:03.754 [2024-11-15 10:47:34.220557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:03.754 [2024-11-15 10:47:34.223272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.754 [2024-11-15 10:47:34.223584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:03.754 [2024-11-15 10:47:34.223689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:03.754 [2024-11-15 10:47:34.224006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:03.754 [2024-11-15 10:47:34.224033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:03.754 [2024-11-15 10:47:34.224450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:03.754 [2024-11-15 10:47:34.232717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:03.754 [2024-11-15 10:47:34.232755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:03.754 [2024-11-15 10:47:34.233056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.754 
10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.754 "name": "raid_bdev1", 00:19:03.754 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785", 00:19:03.754 "strip_size_kb": 64, 00:19:03.754 "state": "online", 00:19:03.754 "raid_level": "raid5f", 00:19:03.754 "superblock": true, 00:19:03.754 "num_base_bdevs": 4, 00:19:03.754 "num_base_bdevs_discovered": 4, 00:19:03.754 "num_base_bdevs_operational": 4, 00:19:03.754 "base_bdevs_list": [ 00:19:03.754 { 00:19:03.754 "name": "pt1", 00:19:03.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.754 "is_configured": true, 00:19:03.754 "data_offset": 2048, 00:19:03.754 "data_size": 63488 00:19:03.754 }, 00:19:03.754 { 00:19:03.754 "name": "pt2", 00:19:03.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.754 "is_configured": true, 00:19:03.754 "data_offset": 2048, 00:19:03.754 
"data_size": 63488 00:19:03.754 }, 00:19:03.754 { 00:19:03.754 "name": "pt3", 00:19:03.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:03.754 "is_configured": true, 00:19:03.754 "data_offset": 2048, 00:19:03.754 "data_size": 63488 00:19:03.754 }, 00:19:03.754 { 00:19:03.754 "name": "pt4", 00:19:03.754 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:03.754 "is_configured": true, 00:19:03.754 "data_offset": 2048, 00:19:03.754 "data_size": 63488 00:19:03.754 } 00:19:03.754 ] 00:19:03.754 }' 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.754 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.320 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:04.320 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:04.320 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 [2024-11-15 10:47:34.749742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:04.321 "name": "raid_bdev1", 00:19:04.321 "aliases": [ 00:19:04.321 "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785" 00:19:04.321 ], 00:19:04.321 "product_name": "Raid Volume", 00:19:04.321 "block_size": 512, 00:19:04.321 "num_blocks": 190464, 00:19:04.321 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785", 00:19:04.321 "assigned_rate_limits": { 00:19:04.321 "rw_ios_per_sec": 0, 00:19:04.321 "rw_mbytes_per_sec": 0, 00:19:04.321 "r_mbytes_per_sec": 0, 00:19:04.321 "w_mbytes_per_sec": 0 00:19:04.321 }, 00:19:04.321 "claimed": false, 00:19:04.321 "zoned": false, 00:19:04.321 "supported_io_types": { 00:19:04.321 "read": true, 00:19:04.321 "write": true, 00:19:04.321 "unmap": false, 00:19:04.321 "flush": false, 00:19:04.321 "reset": true, 00:19:04.321 "nvme_admin": false, 00:19:04.321 "nvme_io": false, 00:19:04.321 "nvme_io_md": false, 00:19:04.321 "write_zeroes": true, 00:19:04.321 "zcopy": false, 00:19:04.321 "get_zone_info": false, 00:19:04.321 "zone_management": false, 00:19:04.321 "zone_append": false, 00:19:04.321 "compare": false, 00:19:04.321 "compare_and_write": false, 00:19:04.321 "abort": false, 00:19:04.321 "seek_hole": false, 00:19:04.321 "seek_data": false, 00:19:04.321 "copy": false, 00:19:04.321 "nvme_iov_md": false 00:19:04.321 }, 00:19:04.321 "driver_specific": { 00:19:04.321 "raid": { 00:19:04.321 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785", 00:19:04.321 "strip_size_kb": 64, 00:19:04.321 "state": "online", 00:19:04.321 "raid_level": "raid5f", 00:19:04.321 "superblock": true, 00:19:04.321 "num_base_bdevs": 4, 00:19:04.321 "num_base_bdevs_discovered": 4, 00:19:04.321 "num_base_bdevs_operational": 4, 00:19:04.321 "base_bdevs_list": [ 00:19:04.321 { 00:19:04.321 "name": "pt1", 00:19:04.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.321 "is_configured": true, 00:19:04.321 "data_offset": 2048, 
00:19:04.321 "data_size": 63488 00:19:04.321 }, 00:19:04.321 { 00:19:04.321 "name": "pt2", 00:19:04.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.321 "is_configured": true, 00:19:04.321 "data_offset": 2048, 00:19:04.321 "data_size": 63488 00:19:04.321 }, 00:19:04.321 { 00:19:04.321 "name": "pt3", 00:19:04.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:04.321 "is_configured": true, 00:19:04.321 "data_offset": 2048, 00:19:04.321 "data_size": 63488 00:19:04.321 }, 00:19:04.321 { 00:19:04.321 "name": "pt4", 00:19:04.321 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:04.321 "is_configured": true, 00:19:04.321 "data_offset": 2048, 00:19:04.321 "data_size": 63488 00:19:04.321 } 00:19:04.321 ] 00:19:04.321 } 00:19:04.321 } 00:19:04.321 }' 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:04.321 pt2 00:19:04.321 pt3 00:19:04.321 pt4' 00:19:04.321 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.579 10:47:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.579 10:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.579 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.838 [2024-11-15 10:47:35.137853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.838 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.838 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=763f7038-9ddb-4bcc-8dd8-2fc7dd65b785 00:19:04.838 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
763f7038-9ddb-4bcc-8dd8-2fc7dd65b785 ']' 00:19:04.838 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:04.838 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.838 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.838 [2024-11-15 10:47:35.181623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.838 [2024-11-15 10:47:35.181661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.838 [2024-11-15 10:47:35.181768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.838 [2024-11-15 10:47:35.181880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.838 [2024-11-15 10:47:35.181904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:04.838 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.838 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.839 
10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 10:47:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 [2024-11-15 10:47:35.361715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:04.839 [2024-11-15 10:47:35.364169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:04.839 [2024-11-15 10:47:35.364242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:04.839 [2024-11-15 10:47:35.364295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:04.839 [2024-11-15 10:47:35.364393] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:04.839 [2024-11-15 10:47:35.364467] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:04.839 [2024-11-15 10:47:35.364501] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:04.839 [2024-11-15 10:47:35.364532] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:04.839 [2024-11-15 10:47:35.364554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.839 [2024-11-15 10:47:35.364570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:04.839 request: 00:19:04.839 { 00:19:04.839 "name": "raid_bdev1", 00:19:04.839 "raid_level": "raid5f", 00:19:04.839 "base_bdevs": [ 00:19:04.839 "malloc1", 00:19:04.839 "malloc2", 00:19:04.839 "malloc3", 00:19:04.839 "malloc4" 00:19:04.839 ], 00:19:04.839 "strip_size_kb": 64, 00:19:04.839 "superblock": false, 00:19:04.839 "method": "bdev_raid_create", 00:19:04.839 "req_id": 1 00:19:04.839 } 00:19:04.839 Got JSON-RPC error response 
00:19:04.839 response: 00:19:04.839 { 00:19:04.839 "code": -17, 00:19:04.839 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:04.839 } 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.098 [2024-11-15 10:47:35.441680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:05.098 [2024-11-15 10:47:35.441891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:05.098 [2024-11-15 10:47:35.441975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:05.098 [2024-11-15 10:47:35.442130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.098 [2024-11-15 10:47:35.444904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.098 [2024-11-15 10:47:35.445068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:05.098 [2024-11-15 10:47:35.445272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:05.098 [2024-11-15 10:47:35.445456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:05.098 pt1 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.098 "name": "raid_bdev1", 00:19:05.098 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785", 00:19:05.098 "strip_size_kb": 64, 00:19:05.098 "state": "configuring", 00:19:05.098 "raid_level": "raid5f", 00:19:05.098 "superblock": true, 00:19:05.098 "num_base_bdevs": 4, 00:19:05.098 "num_base_bdevs_discovered": 1, 00:19:05.098 "num_base_bdevs_operational": 4, 00:19:05.098 "base_bdevs_list": [ 00:19:05.098 { 00:19:05.098 "name": "pt1", 00:19:05.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.098 "is_configured": true, 00:19:05.098 "data_offset": 2048, 00:19:05.098 "data_size": 63488 00:19:05.098 }, 00:19:05.098 { 00:19:05.098 "name": null, 00:19:05.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.098 "is_configured": false, 00:19:05.098 "data_offset": 2048, 00:19:05.098 "data_size": 63488 00:19:05.098 }, 00:19:05.098 { 00:19:05.098 "name": null, 00:19:05.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:05.098 "is_configured": false, 00:19:05.098 "data_offset": 2048, 00:19:05.098 "data_size": 63488 00:19:05.098 }, 00:19:05.098 { 00:19:05.098 "name": null, 00:19:05.098 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:05.098 "is_configured": false, 00:19:05.098 "data_offset": 2048, 00:19:05.098 "data_size": 63488 00:19:05.098 } 00:19:05.098 ] 00:19:05.098 }' 
00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.098 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.666 [2024-11-15 10:47:35.937961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.666 [2024-11-15 10:47:35.938055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.666 [2024-11-15 10:47:35.938087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:05.666 [2024-11-15 10:47:35.938104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.666 [2024-11-15 10:47:35.938657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.666 [2024-11-15 10:47:35.938695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.666 [2024-11-15 10:47:35.938798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:05.666 [2024-11-15 10:47:35.938836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.666 pt2 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.666 [2024-11-15 10:47:35.945942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.666 10:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:05.666 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.666 "name": "raid_bdev1", 00:19:05.666 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785", 00:19:05.666 "strip_size_kb": 64, 00:19:05.666 "state": "configuring", 00:19:05.666 "raid_level": "raid5f", 00:19:05.666 "superblock": true, 00:19:05.666 "num_base_bdevs": 4, 00:19:05.666 "num_base_bdevs_discovered": 1, 00:19:05.666 "num_base_bdevs_operational": 4, 00:19:05.666 "base_bdevs_list": [ 00:19:05.666 { 00:19:05.666 "name": "pt1", 00:19:05.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.666 "is_configured": true, 00:19:05.666 "data_offset": 2048, 00:19:05.666 "data_size": 63488 00:19:05.666 }, 00:19:05.666 { 00:19:05.666 "name": null, 00:19:05.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.666 "is_configured": false, 00:19:05.666 "data_offset": 0, 00:19:05.666 "data_size": 63488 00:19:05.666 }, 00:19:05.666 { 00:19:05.666 "name": null, 00:19:05.666 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:05.666 "is_configured": false, 00:19:05.666 "data_offset": 2048, 00:19:05.666 "data_size": 63488 00:19:05.666 }, 00:19:05.666 { 00:19:05.666 "name": null, 00:19:05.666 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:05.666 "is_configured": false, 00:19:05.666 "data_offset": 2048, 00:19:05.666 "data_size": 63488 00:19:05.666 } 00:19:05.666 ] 00:19:05.666 }' 00:19:05.666 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.666 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.925 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:05.925 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.925 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:05.925 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.925 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.925 [2024-11-15 10:47:36.434131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.925 [2024-11-15 10:47:36.434212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.925 [2024-11-15 10:47:36.434244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:05.925 [2024-11-15 10:47:36.434259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.925 [2024-11-15 10:47:36.434837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.926 [2024-11-15 10:47:36.434863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.926 [2024-11-15 10:47:36.434983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:05.926 [2024-11-15 10:47:36.435018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.926 pt2 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.926 [2024-11-15 10:47:36.446081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:05.926 [2024-11-15 10:47:36.446157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.926 [2024-11-15 10:47:36.446192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:05.926 [2024-11-15 10:47:36.446210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.926 [2024-11-15 10:47:36.446699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.926 [2024-11-15 10:47:36.446724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:05.926 [2024-11-15 10:47:36.446821] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:05.926 [2024-11-15 10:47:36.446859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:05.926 pt3 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.926 [2024-11-15 10:47:36.454054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:05.926 [2024-11-15 10:47:36.454109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.926 [2024-11-15 10:47:36.454135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:05.926 [2024-11-15 10:47:36.454149] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.926 [2024-11-15 10:47:36.454658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.926 [2024-11-15 10:47:36.454690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:05.926 [2024-11-15 10:47:36.454773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:05.926 [2024-11-15 10:47:36.454803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:05.926 [2024-11-15 10:47:36.454993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:05.926 [2024-11-15 10:47:36.455010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:05.926 [2024-11-15 10:47:36.455311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:05.926 [2024-11-15 10:47:36.461592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:05.926 [2024-11-15 10:47:36.461623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:05.926 [2024-11-15 10:47:36.461850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.926 pt4 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.926 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.184 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.184 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.184 "name": "raid_bdev1", 00:19:06.184 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785", 00:19:06.184 "strip_size_kb": 64, 00:19:06.184 "state": "online", 00:19:06.184 "raid_level": "raid5f", 00:19:06.184 "superblock": true, 00:19:06.184 "num_base_bdevs": 4, 00:19:06.184 "num_base_bdevs_discovered": 4, 00:19:06.184 "num_base_bdevs_operational": 4, 00:19:06.184 "base_bdevs_list": [ 00:19:06.184 { 00:19:06.184 "name": "pt1", 00:19:06.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.184 "is_configured": true, 00:19:06.184 
"data_offset": 2048, 00:19:06.184 "data_size": 63488 00:19:06.184 }, 00:19:06.184 { 00:19:06.184 "name": "pt2", 00:19:06.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.184 "is_configured": true, 00:19:06.184 "data_offset": 2048, 00:19:06.184 "data_size": 63488 00:19:06.184 }, 00:19:06.184 { 00:19:06.184 "name": "pt3", 00:19:06.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.184 "is_configured": true, 00:19:06.184 "data_offset": 2048, 00:19:06.184 "data_size": 63488 00:19:06.184 }, 00:19:06.184 { 00:19:06.184 "name": "pt4", 00:19:06.184 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.185 "is_configured": true, 00:19:06.185 "data_offset": 2048, 00:19:06.185 "data_size": 63488 00:19:06.185 } 00:19:06.185 ] 00:19:06.185 }' 00:19:06.185 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.185 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.443 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:06.443 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:06.443 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:06.443 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:06.443 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:06.443 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:06.443 10:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.443 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.443 10:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.443 10:47:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:06.443 [2024-11-15 10:47:36.993033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.701 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.701 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.701 "name": "raid_bdev1", 00:19:06.701 "aliases": [ 00:19:06.701 "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785" 00:19:06.701 ], 00:19:06.701 "product_name": "Raid Volume", 00:19:06.701 "block_size": 512, 00:19:06.701 "num_blocks": 190464, 00:19:06.701 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785", 00:19:06.701 "assigned_rate_limits": { 00:19:06.701 "rw_ios_per_sec": 0, 00:19:06.701 "rw_mbytes_per_sec": 0, 00:19:06.701 "r_mbytes_per_sec": 0, 00:19:06.701 "w_mbytes_per_sec": 0 00:19:06.701 }, 00:19:06.701 "claimed": false, 00:19:06.702 "zoned": false, 00:19:06.702 "supported_io_types": { 00:19:06.702 "read": true, 00:19:06.702 "write": true, 00:19:06.702 "unmap": false, 00:19:06.702 "flush": false, 00:19:06.702 "reset": true, 00:19:06.702 "nvme_admin": false, 00:19:06.702 "nvme_io": false, 00:19:06.702 "nvme_io_md": false, 00:19:06.702 "write_zeroes": true, 00:19:06.702 "zcopy": false, 00:19:06.702 "get_zone_info": false, 00:19:06.702 "zone_management": false, 00:19:06.702 "zone_append": false, 00:19:06.702 "compare": false, 00:19:06.702 "compare_and_write": false, 00:19:06.702 "abort": false, 00:19:06.702 "seek_hole": false, 00:19:06.702 "seek_data": false, 00:19:06.702 "copy": false, 00:19:06.702 "nvme_iov_md": false 00:19:06.702 }, 00:19:06.702 "driver_specific": { 00:19:06.702 "raid": { 00:19:06.702 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785", 00:19:06.702 "strip_size_kb": 64, 00:19:06.702 "state": "online", 00:19:06.702 "raid_level": "raid5f", 00:19:06.702 "superblock": true, 00:19:06.702 "num_base_bdevs": 4, 00:19:06.702 "num_base_bdevs_discovered": 4, 
00:19:06.702 "num_base_bdevs_operational": 4, 00:19:06.702 "base_bdevs_list": [ 00:19:06.702 { 00:19:06.702 "name": "pt1", 00:19:06.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.702 "is_configured": true, 00:19:06.702 "data_offset": 2048, 00:19:06.702 "data_size": 63488 00:19:06.702 }, 00:19:06.702 { 00:19:06.702 "name": "pt2", 00:19:06.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.702 "is_configured": true, 00:19:06.702 "data_offset": 2048, 00:19:06.702 "data_size": 63488 00:19:06.702 }, 00:19:06.702 { 00:19:06.702 "name": "pt3", 00:19:06.702 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.702 "is_configured": true, 00:19:06.702 "data_offset": 2048, 00:19:06.702 "data_size": 63488 00:19:06.702 }, 00:19:06.702 { 00:19:06.702 "name": "pt4", 00:19:06.702 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.702 "is_configured": true, 00:19:06.702 "data_offset": 2048, 00:19:06.702 "data_size": 63488 00:19:06.702 } 00:19:06.702 ] 00:19:06.702 } 00:19:06.702 } 00:19:06.702 }' 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:06.702 pt2 00:19:06.702 pt3 00:19:06.702 pt4' 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.702 10:47:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.702 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.960 [2024-11-15 10:47:37.389272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.960 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.961 
10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 763f7038-9ddb-4bcc-8dd8-2fc7dd65b785 '!=' 763f7038-9ddb-4bcc-8dd8-2fc7dd65b785 ']' 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.961 [2024-11-15 10:47:37.437108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered
00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:06.961 "name": "raid_bdev1",
00:19:06.961 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785",
00:19:06.961 "strip_size_kb": 64,
00:19:06.961 "state": "online",
00:19:06.961 "raid_level": "raid5f",
00:19:06.961 "superblock": true,
00:19:06.961 "num_base_bdevs": 4,
00:19:06.961 "num_base_bdevs_discovered": 3,
00:19:06.961 "num_base_bdevs_operational": 3,
00:19:06.961 "base_bdevs_list": [
00:19:06.961 {
00:19:06.961 "name": null,
00:19:06.961 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:06.961 "is_configured": false,
00:19:06.961 "data_offset": 0,
00:19:06.961 "data_size": 63488
00:19:06.961 },
00:19:06.961 {
00:19:06.961 "name": "pt2",
00:19:06.961 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:06.961 "is_configured": true,
00:19:06.961 "data_offset": 2048,
00:19:06.961 "data_size": 63488
00:19:06.961 },
00:19:06.961 {
00:19:06.961 "name": "pt3",
00:19:06.961 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:06.961 "is_configured": true,
00:19:06.961 "data_offset": 2048,
00:19:06.961 "data_size": 63488
00:19:06.961 },
00:19:06.961 {
00:19:06.961 "name": "pt4",
00:19:06.961 "uuid": "00000000-0000-0000-0000-000000000004",
00:19:06.961 "is_configured": true,
00:19:06.961 "data_offset": 2048,
00:19:06.961 "data_size": 63488
00:19:06.961 }
00:19:06.961 ]
00:19:06.961 }'
00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:06.961 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:07.531 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:19:07.531 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.531 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:07.531 [2024-11-15 10:47:37.957125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:07.531 [2024-11-15 10:47:37.957170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:07.531 [2024-11-15 10:47:37.957273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:07.531 [2024-11-15 10:47:37.957424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:07.531 [2024-11-15 10:47:37.957454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:19:07.531 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.531 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:07.531 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.531 10:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:19:07.531 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:07.531 10:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:07.531 [2024-11-15 10:47:38.049134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:07.531 [2024-11-15 10:47:38.049212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:07.531 [2024-11-15 10:47:38.049243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:19:07.531 [2024-11-15 10:47:38.049259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:07.531 [2024-11-15 10:47:38.051998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:07.531 [2024-11-15 10:47:38.052181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:07.531 [2024-11-15 10:47:38.052316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:19:07.531 [2024-11-15 10:47:38.052396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:07.531 pt2
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:07.531 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.794 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:07.794 "name": "raid_bdev1",
00:19:07.794 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785",
00:19:07.794 "strip_size_kb": 64,
00:19:07.794 "state": "configuring",
00:19:07.794 "raid_level": "raid5f",
00:19:07.794 "superblock": true,
00:19:07.794 "num_base_bdevs": 4,
00:19:07.794 "num_base_bdevs_discovered": 1,
00:19:07.794 "num_base_bdevs_operational": 3,
00:19:07.794 "base_bdevs_list": [
00:19:07.794 {
00:19:07.794 "name": null,
00:19:07.794 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:07.794 "is_configured": false,
00:19:07.794 "data_offset": 2048,
00:19:07.794 "data_size": 63488
00:19:07.794 },
00:19:07.794 {
00:19:07.794 "name": "pt2",
00:19:07.794 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:07.794 "is_configured": true,
00:19:07.794 "data_offset": 2048,
00:19:07.794 "data_size": 63488
00:19:07.794 },
00:19:07.794 {
00:19:07.794 "name": null,
00:19:07.794 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:07.794 "is_configured": false,
00:19:07.794 "data_offset": 2048,
00:19:07.794 "data_size": 63488
00:19:07.794 },
00:19:07.794 {
00:19:07.794 "name": null,
00:19:07.794 "uuid": "00000000-0000-0000-0000-000000000004",
00:19:07.794 "is_configured": false,
00:19:07.794 "data_offset": 2048,
00:19:07.794 "data_size": 63488
00:19:07.794 }
00:19:07.794 ]
00:19:07.794 }'
00:19:07.794 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:07.794 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:08.052 [2024-11-15 10:47:38.589287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:08.052 [2024-11-15 10:47:38.589414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:08.052 [2024-11-15 10:47:38.589455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:19:08.052 [2024-11-15 10:47:38.589471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:08.052 [2024-11-15 10:47:38.589997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:08.052 [2024-11-15 10:47:38.590029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:08.052 [2024-11-15 10:47:38.590144] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:19:08.052 [2024-11-15 10:47:38.590201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:08.052 pt3
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.052 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:08.310 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.310 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:08.310 "name": "raid_bdev1",
00:19:08.310 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785",
00:19:08.310 "strip_size_kb": 64,
00:19:08.310 "state": "configuring",
00:19:08.310 "raid_level": "raid5f",
00:19:08.310 "superblock": true,
00:19:08.310 "num_base_bdevs": 4,
00:19:08.310 "num_base_bdevs_discovered": 2,
00:19:08.310 "num_base_bdevs_operational": 3,
00:19:08.310 "base_bdevs_list": [
00:19:08.310 {
00:19:08.310 "name": null,
00:19:08.310 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:08.310 "is_configured": false,
00:19:08.310 "data_offset": 2048,
00:19:08.310 "data_size": 63488
00:19:08.310 },
00:19:08.310 {
00:19:08.310 "name": "pt2",
00:19:08.310 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:08.310 "is_configured": true,
00:19:08.310 "data_offset": 2048,
00:19:08.310 "data_size": 63488
00:19:08.310 },
00:19:08.310 {
00:19:08.310 "name": "pt3",
00:19:08.310 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:08.310 "is_configured": true,
00:19:08.310 "data_offset": 2048,
00:19:08.310 "data_size": 63488
00:19:08.310 },
00:19:08.310 {
00:19:08.310 "name": null,
00:19:08.310 "uuid": "00000000-0000-0000-0000-000000000004",
00:19:08.310 "is_configured": false,
00:19:08.310 "data_offset": 2048,
00:19:08.310 "data_size": 63488
00:19:08.310 }
00:19:08.310 ]
00:19:08.310 }'
00:19:08.310 10:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:08.310 10:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:08.875 [2024-11-15 10:47:39.141462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:19:08.875 [2024-11-15 10:47:39.141555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:08.875 [2024-11-15 10:47:39.141590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:19:08.875 [2024-11-15 10:47:39.141605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:08.875 [2024-11-15 10:47:39.142150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:08.875 [2024-11-15 10:47:39.142176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:19:08.875 [2024-11-15 10:47:39.142282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:19:08.875 [2024-11-15 10:47:39.142316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:19:08.875 [2024-11-15 10:47:39.142519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:19:08.875 [2024-11-15 10:47:39.142536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:19:08.875 [2024-11-15 10:47:39.142841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:19:08.875 [2024-11-15 10:47:39.149166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:19:08.875 [2024-11-15 10:47:39.149198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:19:08.875 [2024-11-15 10:47:39.149544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:08.875 pt4
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:08.875 "name": "raid_bdev1",
00:19:08.875 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785",
00:19:08.875 "strip_size_kb": 64,
00:19:08.875 "state": "online",
00:19:08.875 "raid_level": "raid5f",
00:19:08.875 "superblock": true,
00:19:08.875 "num_base_bdevs": 4,
00:19:08.875 "num_base_bdevs_discovered": 3,
00:19:08.875 "num_base_bdevs_operational": 3,
00:19:08.875 "base_bdevs_list": [
00:19:08.875 {
00:19:08.875 "name": null,
00:19:08.875 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:08.875 "is_configured": false,
00:19:08.875 "data_offset": 2048,
00:19:08.875 "data_size": 63488
00:19:08.875 },
00:19:08.875 {
00:19:08.875 "name": "pt2",
00:19:08.875 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:08.875 "is_configured": true,
00:19:08.875 "data_offset": 2048,
00:19:08.875 "data_size": 63488
00:19:08.875 },
00:19:08.875 {
00:19:08.875 "name": "pt3",
00:19:08.875 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:08.875 "is_configured": true,
00:19:08.875 "data_offset": 2048,
00:19:08.875 "data_size": 63488
00:19:08.875 },
00:19:08.875 {
00:19:08.875 "name": "pt4",
00:19:08.875 "uuid": "00000000-0000-0000-0000-000000000004",
00:19:08.875 "is_configured": true,
00:19:08.875 "data_offset": 2048,
00:19:08.875 "data_size": 63488
00:19:08.875 }
00:19:08.875 ]
00:19:08.875 }'
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:08.875 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.133 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:19:09.133 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.133 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.133 [2024-11-15 10:47:39.680464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:09.133 [2024-11-15 10:47:39.680500] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:09.133 [2024-11-15 10:47:39.680602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:09.133 [2024-11-15 10:47:39.680703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:09.133 [2024-11-15 10:47:39.680724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:19:09.133 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.133 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:09.133 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:19:09.133 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.133 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.392 [2024-11-15 10:47:39.740492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:09.392 [2024-11-15 10:47:39.740592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:09.392 [2024-11-15 10:47:39.740632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:19:09.392 [2024-11-15 10:47:39.740650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:09.392 [2024-11-15 10:47:39.743600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:09.392 [2024-11-15 10:47:39.743665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:09.392 [2024-11-15 10:47:39.743791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:19:09.392 [2024-11-15 10:47:39.743858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:09.392 [2024-11-15 10:47:39.744028] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:19:09.392 [2024-11-15 10:47:39.744052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:09.392 [2024-11-15 10:47:39.744074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:19:09.392 [2024-11-15 10:47:39.744146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:09.392 [2024-11-15 10:47:39.744306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:09.392 pt1
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:09.392 "name": "raid_bdev1",
00:19:09.392 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785",
00:19:09.392 "strip_size_kb": 64,
00:19:09.392 "state": "configuring",
00:19:09.392 "raid_level": "raid5f",
00:19:09.392 "superblock": true,
00:19:09.392 "num_base_bdevs": 4,
00:19:09.392 "num_base_bdevs_discovered": 2,
00:19:09.392 "num_base_bdevs_operational": 3,
00:19:09.392 "base_bdevs_list": [
00:19:09.392 {
00:19:09.392 "name": null,
00:19:09.392 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:09.392 "is_configured": false,
00:19:09.392 "data_offset": 2048,
00:19:09.392 "data_size": 63488
00:19:09.392 },
00:19:09.392 {
00:19:09.392 "name": "pt2",
00:19:09.392 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:09.392 "is_configured": true,
00:19:09.392 "data_offset": 2048,
00:19:09.392 "data_size": 63488
00:19:09.392 },
00:19:09.392 {
00:19:09.392 "name": "pt3",
00:19:09.392 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:09.392 "is_configured": true,
00:19:09.392 "data_offset": 2048,
00:19:09.392 "data_size": 63488
00:19:09.392 },
00:19:09.392 {
00:19:09.392 "name": null,
00:19:09.392 "uuid": "00000000-0000-0000-0000-000000000004",
00:19:09.392 "is_configured": false,
00:19:09.392 "data_offset": 2048,
00:19:09.392 "data_size": 63488
00:19:09.392 }
00:19:09.392 ]
00:19:09.392 }'
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:09.392 10:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.959 [2024-11-15 10:47:40.356717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:19:09.959 [2024-11-15 10:47:40.356805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:09.959 [2024-11-15 10:47:40.356841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:19:09.959 [2024-11-15 10:47:40.356856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:09.959 [2024-11-15 10:47:40.357433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:09.959 [2024-11-15 10:47:40.357460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:19:09.959 [2024-11-15 10:47:40.357571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:19:09.959 [2024-11-15 10:47:40.357605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:19:09.959 [2024-11-15 10:47:40.357820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:19:09.959 [2024-11-15 10:47:40.357838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:19:09.959 [2024-11-15 10:47:40.358166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:19:09.959 [2024-11-15 10:47:40.364608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:19:09.959 [2024-11-15 10:47:40.364820] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:19:09.959 [2024-11-15 10:47:40.365234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:09.959 pt4
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:09.959 "name": "raid_bdev1",
00:19:09.959 "uuid": "763f7038-9ddb-4bcc-8dd8-2fc7dd65b785",
00:19:09.959 "strip_size_kb": 64,
00:19:09.959 "state": "online",
00:19:09.959 "raid_level": "raid5f",
00:19:09.959 "superblock": true,
00:19:09.959 "num_base_bdevs": 4,
00:19:09.959 "num_base_bdevs_discovered": 3,
00:19:09.959 "num_base_bdevs_operational": 3,
00:19:09.959 "base_bdevs_list": [
00:19:09.959 {
00:19:09.959 "name": null,
00:19:09.959 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:09.959 "is_configured": false,
00:19:09.959 "data_offset": 2048,
00:19:09.959 "data_size": 63488
00:19:09.959 },
00:19:09.959 {
00:19:09.959 "name": "pt2",
00:19:09.959 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:09.959 "is_configured": true,
00:19:09.959 "data_offset": 2048,
00:19:09.959 "data_size": 63488
00:19:09.959 },
00:19:09.959 {
00:19:09.959 "name": "pt3",
00:19:09.959 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:09.959 "is_configured": true,
00:19:09.959 "data_offset": 2048,
00:19:09.959 "data_size": 63488
00:19:09.959 },
00:19:09.959 {
00:19:09.959 "name": "pt4",
00:19:09.959 "uuid": "00000000-0000-0000-0000-000000000004",
00:19:09.959 "is_configured": true,
00:19:09.959 "data_offset": 2048,
00:19:09.959 "data_size": 63488
00:19:09.959 }
00:19:09.959 ]
00:19:09.959 }'
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:09.959 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:10.527 [2024-11-15 10:47:40.952565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:10.527 10:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 763f7038-9ddb-4bcc-8dd8-2fc7dd65b785 '!=' 763f7038-9ddb-4bcc-8dd8-2fc7dd65b785 ']'
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84676
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84676 ']'
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84676
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84676
00:19:10.527 killing process with pid 84676
10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84676'
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84676
00:19:10.527 [2024-11-15 10:47:41.038837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:10.527 10:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84676
00:19:10.527 [2024-11-15 10:47:41.038980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:10.527 [2024-11-15 10:47:41.039086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:10.527 [2024-11-15 10:47:41.039107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:19:11.093 [2024-11-15 10:47:41.460668] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:12.029 ************************************
00:19:12.029 END TEST raid5f_superblock_test
************************************
00:19:12.029 10:47:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:19:12.029
00:19:12.029 real 0m9.698s
00:19:12.029 user 0m16.061s
00:19:12.029 sys 0m1.185s
00:19:12.029 10:47:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:12.029 10:47:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:12.029 10:47:42 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']'
00:19:12.029 10:47:42 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true
00:19:12.029 10:47:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']'
00:19:12.029 10:47:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:19:12.029 10:47:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:19:12.029 ************************************
00:19:12.029 START TEST raid5f_rebuild_test
00:19:12.029 ************************************
00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true
00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo
BaseBdev1 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:12.029 10:47:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85166 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85166 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 85166 ']' 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:12.029 10:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.288 [2024-11-15 10:47:42.676601] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:19:12.288 [2024-11-15 10:47:42.677144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85166 ] 00:19:12.288 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:12.288 Zero copy mechanism will not be used. [2024-11-15 10:47:42.860784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.546 [2024-11-15 10:47:42.967701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.805 [2024-11-15 10:47:43.152811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.805 [2024-11-15 10:47:43.153089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.371 BaseBdev1_malloc 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10
-- # set +x 00:19:13.371 [2024-11-15 10:47:43.830319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:13.371 [2024-11-15 10:47:43.830421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.371 [2024-11-15 10:47:43.830455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:13.371 [2024-11-15 10:47:43.830474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.371 [2024-11-15 10:47:43.833245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.371 [2024-11-15 10:47:43.833305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:13.371 BaseBdev1 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.371 BaseBdev2_malloc 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.371 [2024-11-15 10:47:43.886341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:13.371 [2024-11-15 10:47:43.886493] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.371 [2024-11-15 10:47:43.886553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:13.371 [2024-11-15 10:47:43.886586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.371 [2024-11-15 10:47:43.890310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.371 [2024-11-15 10:47:43.890611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:13.371 BaseBdev2 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.371 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.630 BaseBdev3_malloc 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.630 [2024-11-15 10:47:43.949542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:13.630 [2024-11-15 10:47:43.949618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.630 [2024-11-15 10:47:43.949650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:13.630 
[2024-11-15 10:47:43.949668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.630 [2024-11-15 10:47:43.952332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.630 [2024-11-15 10:47:43.952402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:13.630 BaseBdev3 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.630 BaseBdev4_malloc 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.630 10:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.630 [2024-11-15 10:47:43.998136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:13.630 [2024-11-15 10:47:43.998230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.630 [2024-11-15 10:47:43.998262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:13.630 [2024-11-15 10:47:43.998279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.630 [2024-11-15 10:47:44.001020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:19:13.630 [2024-11-15 10:47:44.001080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:13.630 BaseBdev4 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.630 spare_malloc 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.630 spare_delay 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.630 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.630 [2024-11-15 10:47:44.066763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:13.630 [2024-11-15 10:47:44.066835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.630 [2024-11-15 10:47:44.066864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:13.630 [2024-11-15 10:47:44.066881] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.631 [2024-11-15 10:47:44.069508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.631 [2024-11-15 10:47:44.069559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:13.631 spare 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 [2024-11-15 10:47:44.074824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.631 [2024-11-15 10:47:44.077091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.631 [2024-11-15 10:47:44.077182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.631 [2024-11-15 10:47:44.077265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:13.631 [2024-11-15 10:47:44.077431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:13.631 [2024-11-15 10:47:44.077453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:13.631 [2024-11-15 10:47:44.077801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:13.631 [2024-11-15 10:47:44.084701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:13.631 [2024-11-15 10:47:44.084874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:13.631 [2024-11-15 
10:47:44.085296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.631 "name": "raid_bdev1", 00:19:13.631 "uuid": 
"743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:13.631 "strip_size_kb": 64, 00:19:13.631 "state": "online", 00:19:13.631 "raid_level": "raid5f", 00:19:13.631 "superblock": false, 00:19:13.631 "num_base_bdevs": 4, 00:19:13.631 "num_base_bdevs_discovered": 4, 00:19:13.631 "num_base_bdevs_operational": 4, 00:19:13.631 "base_bdevs_list": [ 00:19:13.631 { 00:19:13.631 "name": "BaseBdev1", 00:19:13.631 "uuid": "6fbc8306-ea6b-5fb9-8974-9f6b6f3b26a5", 00:19:13.631 "is_configured": true, 00:19:13.631 "data_offset": 0, 00:19:13.631 "data_size": 65536 00:19:13.631 }, 00:19:13.631 { 00:19:13.631 "name": "BaseBdev2", 00:19:13.631 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:13.631 "is_configured": true, 00:19:13.631 "data_offset": 0, 00:19:13.631 "data_size": 65536 00:19:13.631 }, 00:19:13.631 { 00:19:13.631 "name": "BaseBdev3", 00:19:13.631 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:13.631 "is_configured": true, 00:19:13.631 "data_offset": 0, 00:19:13.631 "data_size": 65536 00:19:13.631 }, 00:19:13.631 { 00:19:13.631 "name": "BaseBdev4", 00:19:13.631 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:13.631 "is_configured": true, 00:19:13.631 "data_offset": 0, 00:19:13.631 "data_size": 65536 00:19:13.631 } 00:19:13.631 ] 00:19:13.631 }' 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.631 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.197 [2024-11-15 10:47:44.620678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:14.197 10:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:14.763 [2024-11-15 10:47:45.024597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:14.763 /dev/nbd0 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:14.763 1+0 records in 00:19:14.763 1+0 records out 00:19:14.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529109 s, 7.7 MB/s 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.763 10:47:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:14.763 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:15.329 512+0 records in 00:19:15.329 512+0 records out 00:19:15.329 100663296 bytes (101 MB, 96 MiB) copied, 0.715664 s, 141 MB/s 00:19:15.329 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:15.329 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:15.329 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:15.329 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:15.329 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:15.329 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:15.329 10:47:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:19:15.893 [2024-11-15 10:47:46.233937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.893 [2024-11-15 10:47:46.249237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.893 "name": "raid_bdev1", 00:19:15.893 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:15.893 "strip_size_kb": 64, 00:19:15.893 "state": "online", 00:19:15.893 "raid_level": "raid5f", 00:19:15.893 "superblock": false, 00:19:15.893 "num_base_bdevs": 4, 00:19:15.893 "num_base_bdevs_discovered": 3, 00:19:15.893 "num_base_bdevs_operational": 3, 00:19:15.893 "base_bdevs_list": [ 00:19:15.893 { 00:19:15.893 "name": null, 00:19:15.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.893 "is_configured": false, 00:19:15.893 "data_offset": 0, 00:19:15.893 "data_size": 65536 00:19:15.893 }, 00:19:15.893 { 00:19:15.893 "name": "BaseBdev2", 00:19:15.893 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:15.893 "is_configured": true, 00:19:15.893 
"data_offset": 0, 00:19:15.893 "data_size": 65536 00:19:15.893 }, 00:19:15.893 { 00:19:15.893 "name": "BaseBdev3", 00:19:15.893 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:15.893 "is_configured": true, 00:19:15.893 "data_offset": 0, 00:19:15.893 "data_size": 65536 00:19:15.893 }, 00:19:15.893 { 00:19:15.893 "name": "BaseBdev4", 00:19:15.893 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:15.893 "is_configured": true, 00:19:15.893 "data_offset": 0, 00:19:15.893 "data_size": 65536 00:19:15.893 } 00:19:15.893 ] 00:19:15.893 }' 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.893 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.459 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:16.459 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.459 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.459 [2024-11-15 10:47:46.801399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.459 [2024-11-15 10:47:46.815621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:16.459 10:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.459 10:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:16.459 [2024-11-15 10:47:46.824758] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.392 "name": "raid_bdev1", 00:19:17.392 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:17.392 "strip_size_kb": 64, 00:19:17.392 "state": "online", 00:19:17.392 "raid_level": "raid5f", 00:19:17.392 "superblock": false, 00:19:17.392 "num_base_bdevs": 4, 00:19:17.392 "num_base_bdevs_discovered": 4, 00:19:17.392 "num_base_bdevs_operational": 4, 00:19:17.392 "process": { 00:19:17.392 "type": "rebuild", 00:19:17.392 "target": "spare", 00:19:17.392 "progress": { 00:19:17.392 "blocks": 17280, 00:19:17.392 "percent": 8 00:19:17.392 } 00:19:17.392 }, 00:19:17.392 "base_bdevs_list": [ 00:19:17.392 { 00:19:17.392 "name": "spare", 00:19:17.392 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:17.392 "is_configured": true, 00:19:17.392 "data_offset": 0, 00:19:17.392 "data_size": 65536 00:19:17.392 }, 00:19:17.392 { 00:19:17.392 "name": "BaseBdev2", 00:19:17.392 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:17.392 "is_configured": true, 00:19:17.392 "data_offset": 0, 00:19:17.392 "data_size": 65536 00:19:17.392 }, 00:19:17.392 { 00:19:17.392 "name": "BaseBdev3", 00:19:17.392 "uuid": 
"c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:17.392 "is_configured": true, 00:19:17.392 "data_offset": 0, 00:19:17.392 "data_size": 65536 00:19:17.392 }, 00:19:17.392 { 00:19:17.392 "name": "BaseBdev4", 00:19:17.392 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:17.392 "is_configured": true, 00:19:17.392 "data_offset": 0, 00:19:17.392 "data_size": 65536 00:19:17.392 } 00:19:17.392 ] 00:19:17.392 }' 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.392 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.650 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.650 10:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:17.650 10:47:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.650 10:47:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.650 [2024-11-15 10:47:47.978494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.650 [2024-11-15 10:47:48.036439] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:17.650 [2024-11-15 10:47:48.036570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.650 [2024-11-15 10:47:48.036611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.650 [2024-11-15 10:47:48.036626] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.650 "name": "raid_bdev1", 00:19:17.650 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:17.650 "strip_size_kb": 64, 00:19:17.650 "state": "online", 00:19:17.650 "raid_level": "raid5f", 00:19:17.650 "superblock": false, 00:19:17.650 "num_base_bdevs": 4, 00:19:17.650 "num_base_bdevs_discovered": 3, 00:19:17.650 
"num_base_bdevs_operational": 3, 00:19:17.650 "base_bdevs_list": [ 00:19:17.650 { 00:19:17.650 "name": null, 00:19:17.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.650 "is_configured": false, 00:19:17.650 "data_offset": 0, 00:19:17.650 "data_size": 65536 00:19:17.650 }, 00:19:17.650 { 00:19:17.650 "name": "BaseBdev2", 00:19:17.650 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:17.650 "is_configured": true, 00:19:17.650 "data_offset": 0, 00:19:17.650 "data_size": 65536 00:19:17.650 }, 00:19:17.650 { 00:19:17.650 "name": "BaseBdev3", 00:19:17.650 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:17.650 "is_configured": true, 00:19:17.650 "data_offset": 0, 00:19:17.650 "data_size": 65536 00:19:17.650 }, 00:19:17.650 { 00:19:17.650 "name": "BaseBdev4", 00:19:17.650 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:17.650 "is_configured": true, 00:19:17.650 "data_offset": 0, 00:19:17.650 "data_size": 65536 00:19:17.650 } 00:19:17.650 ] 00:19:17.650 }' 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.650 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.217 10:47:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.217 "name": "raid_bdev1", 00:19:18.217 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:18.217 "strip_size_kb": 64, 00:19:18.217 "state": "online", 00:19:18.217 "raid_level": "raid5f", 00:19:18.217 "superblock": false, 00:19:18.217 "num_base_bdevs": 4, 00:19:18.217 "num_base_bdevs_discovered": 3, 00:19:18.217 "num_base_bdevs_operational": 3, 00:19:18.217 "base_bdevs_list": [ 00:19:18.217 { 00:19:18.217 "name": null, 00:19:18.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.217 "is_configured": false, 00:19:18.217 "data_offset": 0, 00:19:18.217 "data_size": 65536 00:19:18.217 }, 00:19:18.217 { 00:19:18.217 "name": "BaseBdev2", 00:19:18.217 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:18.217 "is_configured": true, 00:19:18.217 "data_offset": 0, 00:19:18.217 "data_size": 65536 00:19:18.217 }, 00:19:18.217 { 00:19:18.217 "name": "BaseBdev3", 00:19:18.217 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:18.217 "is_configured": true, 00:19:18.217 "data_offset": 0, 00:19:18.217 "data_size": 65536 00:19:18.217 }, 00:19:18.217 { 00:19:18.217 "name": "BaseBdev4", 00:19:18.217 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:18.217 "is_configured": true, 00:19:18.217 "data_offset": 0, 00:19:18.217 "data_size": 65536 00:19:18.217 } 00:19:18.217 ] 00:19:18.217 }' 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.217 [2024-11-15 10:47:48.726132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.217 [2024-11-15 10:47:48.739106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.217 10:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:18.217 [2024-11-15 10:47:48.747743] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.592 10:47:49 
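The checks above filter the `bdev_raid_get_bdevs` array with `jq -r '.[] | select(.name == "raid_bdev1")'` and then use jq's alternative operator (`// "none"`) so that a bdev with no active process compares equal to `none`. A minimal standalone sketch of both filters, run against a trimmed, hypothetical copy of the RPC output (assumes `jq` is installed):

```shell
# Trimmed, hypothetical sample of `rpc_cmd bdev_raid_get_bdevs all` output.
bdevs='[{"name": "other_bdev"},
        {"name": "raid_bdev1", "state": "online",
         "process": {"type": "rebuild", "target": "spare"}}]'

# Same filter as bdev_raid.sh@174: pick the entry named raid_bdev1.
info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$bdevs")

# Same fallback as bdev_raid.sh@176/@177: '.process.type' evaluates to
# null when .process is absent, and '// "none"' substitutes the default.
jq -r '.process.type // "none"' <<< "$info"                   # prints "rebuild"
jq -r '.process.type // "none"' <<< '{"name": "raid_bdev1"}'  # prints "none"
```

This is why the trace can run the same `verify_raid_bdev_process raid_bdev1 none none` check before and after a rebuild without special-casing the missing `process` object.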
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.592 "name": "raid_bdev1", 00:19:19.592 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:19.592 "strip_size_kb": 64, 00:19:19.592 "state": "online", 00:19:19.592 "raid_level": "raid5f", 00:19:19.592 "superblock": false, 00:19:19.592 "num_base_bdevs": 4, 00:19:19.592 "num_base_bdevs_discovered": 4, 00:19:19.592 "num_base_bdevs_operational": 4, 00:19:19.592 "process": { 00:19:19.592 "type": "rebuild", 00:19:19.592 "target": "spare", 00:19:19.592 "progress": { 00:19:19.592 "blocks": 17280, 00:19:19.592 "percent": 8 00:19:19.592 } 00:19:19.592 }, 00:19:19.592 "base_bdevs_list": [ 00:19:19.592 { 00:19:19.592 "name": "spare", 00:19:19.592 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:19.592 "is_configured": true, 00:19:19.592 "data_offset": 0, 00:19:19.592 "data_size": 65536 00:19:19.592 }, 00:19:19.592 { 00:19:19.592 "name": "BaseBdev2", 00:19:19.592 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:19.592 "is_configured": true, 00:19:19.592 "data_offset": 0, 00:19:19.592 "data_size": 65536 00:19:19.592 }, 00:19:19.592 { 00:19:19.592 "name": "BaseBdev3", 00:19:19.592 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:19.592 "is_configured": true, 00:19:19.592 "data_offset": 0, 00:19:19.592 "data_size": 65536 00:19:19.592 }, 00:19:19.592 { 00:19:19.592 "name": "BaseBdev4", 00:19:19.592 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:19.592 "is_configured": true, 00:19:19.592 "data_offset": 0, 00:19:19.592 "data_size": 65536 00:19:19.592 } 00:19:19.592 ] 00:19:19.592 }' 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=663 00:19:19.592 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.593 
"name": "raid_bdev1", 00:19:19.593 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:19.593 "strip_size_kb": 64, 00:19:19.593 "state": "online", 00:19:19.593 "raid_level": "raid5f", 00:19:19.593 "superblock": false, 00:19:19.593 "num_base_bdevs": 4, 00:19:19.593 "num_base_bdevs_discovered": 4, 00:19:19.593 "num_base_bdevs_operational": 4, 00:19:19.593 "process": { 00:19:19.593 "type": "rebuild", 00:19:19.593 "target": "spare", 00:19:19.593 "progress": { 00:19:19.593 "blocks": 21120, 00:19:19.593 "percent": 10 00:19:19.593 } 00:19:19.593 }, 00:19:19.593 "base_bdevs_list": [ 00:19:19.593 { 00:19:19.593 "name": "spare", 00:19:19.593 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:19.593 "is_configured": true, 00:19:19.593 "data_offset": 0, 00:19:19.593 "data_size": 65536 00:19:19.593 }, 00:19:19.593 { 00:19:19.593 "name": "BaseBdev2", 00:19:19.593 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:19.593 "is_configured": true, 00:19:19.593 "data_offset": 0, 00:19:19.593 "data_size": 65536 00:19:19.593 }, 00:19:19.593 { 00:19:19.593 "name": "BaseBdev3", 00:19:19.593 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:19.593 "is_configured": true, 00:19:19.593 "data_offset": 0, 00:19:19.593 "data_size": 65536 00:19:19.593 }, 00:19:19.593 { 00:19:19.593 "name": "BaseBdev4", 00:19:19.593 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:19.593 "is_configured": true, 00:19:19.593 "data_offset": 0, 00:19:19.593 "data_size": 65536 00:19:19.593 } 00:19:19.593 ] 00:19:19.593 }' 00:19:19.593 10:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.593 10:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.593 10:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.593 10:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.593 10:47:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:20.528 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:20.528 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.528 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.528 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.528 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.528 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.529 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.529 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.529 10:47:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.529 10:47:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.529 10:47:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.787 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.787 "name": "raid_bdev1", 00:19:20.787 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:20.787 "strip_size_kb": 64, 00:19:20.787 "state": "online", 00:19:20.787 "raid_level": "raid5f", 00:19:20.787 "superblock": false, 00:19:20.787 "num_base_bdevs": 4, 00:19:20.787 "num_base_bdevs_discovered": 4, 00:19:20.787 "num_base_bdevs_operational": 4, 00:19:20.787 "process": { 00:19:20.787 "type": "rebuild", 00:19:20.787 "target": "spare", 00:19:20.787 "progress": { 00:19:20.787 "blocks": 44160, 00:19:20.787 "percent": 22 00:19:20.787 } 00:19:20.787 }, 00:19:20.787 "base_bdevs_list": [ 00:19:20.787 { 
00:19:20.787 "name": "spare", 00:19:20.787 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:20.787 "is_configured": true, 00:19:20.787 "data_offset": 0, 00:19:20.787 "data_size": 65536 00:19:20.787 }, 00:19:20.787 { 00:19:20.787 "name": "BaseBdev2", 00:19:20.787 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:20.787 "is_configured": true, 00:19:20.787 "data_offset": 0, 00:19:20.787 "data_size": 65536 00:19:20.787 }, 00:19:20.787 { 00:19:20.787 "name": "BaseBdev3", 00:19:20.787 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:20.787 "is_configured": true, 00:19:20.787 "data_offset": 0, 00:19:20.787 "data_size": 65536 00:19:20.787 }, 00:19:20.787 { 00:19:20.787 "name": "BaseBdev4", 00:19:20.787 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:20.787 "is_configured": true, 00:19:20.787 "data_offset": 0, 00:19:20.787 "data_size": 65536 00:19:20.787 } 00:19:20.787 ] 00:19:20.787 }' 00:19:20.787 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.787 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.787 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.787 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.787 10:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.806 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.806 "name": "raid_bdev1", 00:19:21.806 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:21.806 "strip_size_kb": 64, 00:19:21.806 "state": "online", 00:19:21.806 "raid_level": "raid5f", 00:19:21.806 "superblock": false, 00:19:21.806 "num_base_bdevs": 4, 00:19:21.806 "num_base_bdevs_discovered": 4, 00:19:21.806 "num_base_bdevs_operational": 4, 00:19:21.806 "process": { 00:19:21.806 "type": "rebuild", 00:19:21.806 "target": "spare", 00:19:21.806 "progress": { 00:19:21.806 "blocks": 65280, 00:19:21.806 "percent": 33 00:19:21.806 } 00:19:21.806 }, 00:19:21.806 "base_bdevs_list": [ 00:19:21.806 { 00:19:21.806 "name": "spare", 00:19:21.806 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:21.806 "is_configured": true, 00:19:21.806 "data_offset": 0, 00:19:21.806 "data_size": 65536 00:19:21.806 }, 00:19:21.806 { 00:19:21.806 "name": "BaseBdev2", 00:19:21.806 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:21.806 "is_configured": true, 00:19:21.806 "data_offset": 0, 00:19:21.806 "data_size": 65536 00:19:21.806 }, 00:19:21.806 { 00:19:21.806 "name": "BaseBdev3", 00:19:21.806 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:21.806 "is_configured": true, 00:19:21.806 "data_offset": 0, 00:19:21.806 
"data_size": 65536 00:19:21.806 }, 00:19:21.806 { 00:19:21.806 "name": "BaseBdev4", 00:19:21.807 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:21.807 "is_configured": true, 00:19:21.807 "data_offset": 0, 00:19:21.807 "data_size": 65536 00:19:21.807 } 00:19:21.807 ] 00:19:21.807 }' 00:19:21.807 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.807 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.807 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.807 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.807 10:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.183 "name": "raid_bdev1", 00:19:23.183 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:23.183 "strip_size_kb": 64, 00:19:23.183 "state": "online", 00:19:23.183 "raid_level": "raid5f", 00:19:23.183 "superblock": false, 00:19:23.183 "num_base_bdevs": 4, 00:19:23.183 "num_base_bdevs_discovered": 4, 00:19:23.183 "num_base_bdevs_operational": 4, 00:19:23.183 "process": { 00:19:23.183 "type": "rebuild", 00:19:23.183 "target": "spare", 00:19:23.183 "progress": { 00:19:23.183 "blocks": 86400, 00:19:23.183 "percent": 43 00:19:23.183 } 00:19:23.183 }, 00:19:23.183 "base_bdevs_list": [ 00:19:23.183 { 00:19:23.183 "name": "spare", 00:19:23.183 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:23.183 "is_configured": true, 00:19:23.183 "data_offset": 0, 00:19:23.183 "data_size": 65536 00:19:23.183 }, 00:19:23.183 { 00:19:23.183 "name": "BaseBdev2", 00:19:23.183 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:23.183 "is_configured": true, 00:19:23.183 "data_offset": 0, 00:19:23.183 "data_size": 65536 00:19:23.183 }, 00:19:23.183 { 00:19:23.183 "name": "BaseBdev3", 00:19:23.183 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:23.183 "is_configured": true, 00:19:23.183 "data_offset": 0, 00:19:23.183 "data_size": 65536 00:19:23.183 }, 00:19:23.183 { 00:19:23.183 "name": "BaseBdev4", 00:19:23.183 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:23.183 "is_configured": true, 00:19:23.183 "data_offset": 0, 00:19:23.183 "data_size": 65536 00:19:23.183 } 00:19:23.183 ] 00:19:23.183 }' 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.183 10:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.118 "name": "raid_bdev1", 00:19:24.118 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:24.118 "strip_size_kb": 64, 00:19:24.118 "state": "online", 00:19:24.118 "raid_level": "raid5f", 00:19:24.118 "superblock": false, 00:19:24.118 "num_base_bdevs": 4, 00:19:24.118 "num_base_bdevs_discovered": 4, 00:19:24.118 "num_base_bdevs_operational": 4, 00:19:24.118 "process": { 00:19:24.118 "type": "rebuild", 00:19:24.118 "target": "spare", 00:19:24.118 
"progress": { 00:19:24.118 "blocks": 109440, 00:19:24.118 "percent": 55 00:19:24.118 } 00:19:24.118 }, 00:19:24.118 "base_bdevs_list": [ 00:19:24.118 { 00:19:24.118 "name": "spare", 00:19:24.118 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:24.118 "is_configured": true, 00:19:24.118 "data_offset": 0, 00:19:24.118 "data_size": 65536 00:19:24.118 }, 00:19:24.118 { 00:19:24.118 "name": "BaseBdev2", 00:19:24.118 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:24.118 "is_configured": true, 00:19:24.118 "data_offset": 0, 00:19:24.118 "data_size": 65536 00:19:24.118 }, 00:19:24.118 { 00:19:24.118 "name": "BaseBdev3", 00:19:24.118 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:24.118 "is_configured": true, 00:19:24.118 "data_offset": 0, 00:19:24.118 "data_size": 65536 00:19:24.118 }, 00:19:24.118 { 00:19:24.118 "name": "BaseBdev4", 00:19:24.118 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:24.118 "is_configured": true, 00:19:24.118 "data_offset": 0, 00:19:24.118 "data_size": 65536 00:19:24.118 } 00:19:24.118 ] 00:19:24.118 }' 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.118 10:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.506 10:47:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.506 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.506 "name": "raid_bdev1", 00:19:25.506 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:25.506 "strip_size_kb": 64, 00:19:25.506 "state": "online", 00:19:25.506 "raid_level": "raid5f", 00:19:25.506 "superblock": false, 00:19:25.506 "num_base_bdevs": 4, 00:19:25.506 "num_base_bdevs_discovered": 4, 00:19:25.506 "num_base_bdevs_operational": 4, 00:19:25.506 "process": { 00:19:25.506 "type": "rebuild", 00:19:25.506 "target": "spare", 00:19:25.506 "progress": { 00:19:25.506 "blocks": 130560, 00:19:25.506 "percent": 66 00:19:25.506 } 00:19:25.506 }, 00:19:25.506 "base_bdevs_list": [ 00:19:25.506 { 00:19:25.506 "name": "spare", 00:19:25.506 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:25.506 "is_configured": true, 00:19:25.506 "data_offset": 0, 00:19:25.506 "data_size": 65536 00:19:25.506 }, 00:19:25.506 { 00:19:25.506 "name": "BaseBdev2", 00:19:25.506 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:25.506 "is_configured": true, 00:19:25.506 "data_offset": 0, 00:19:25.506 "data_size": 65536 00:19:25.506 }, 00:19:25.506 { 
00:19:25.506 "name": "BaseBdev3", 00:19:25.506 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:25.506 "is_configured": true, 00:19:25.506 "data_offset": 0, 00:19:25.506 "data_size": 65536 00:19:25.506 }, 00:19:25.506 { 00:19:25.506 "name": "BaseBdev4", 00:19:25.506 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:25.506 "is_configured": true, 00:19:25.507 "data_offset": 0, 00:19:25.507 "data_size": 65536 00:19:25.507 } 00:19:25.507 ] 00:19:25.507 }' 00:19:25.507 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.507 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.507 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.507 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.507 10:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.441 "name": "raid_bdev1", 00:19:26.441 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:26.441 "strip_size_kb": 64, 00:19:26.441 "state": "online", 00:19:26.441 "raid_level": "raid5f", 00:19:26.441 "superblock": false, 00:19:26.441 "num_base_bdevs": 4, 00:19:26.441 "num_base_bdevs_discovered": 4, 00:19:26.441 "num_base_bdevs_operational": 4, 00:19:26.441 "process": { 00:19:26.441 "type": "rebuild", 00:19:26.441 "target": "spare", 00:19:26.441 "progress": { 00:19:26.441 "blocks": 153600, 00:19:26.441 "percent": 78 00:19:26.441 } 00:19:26.441 }, 00:19:26.441 "base_bdevs_list": [ 00:19:26.441 { 00:19:26.441 "name": "spare", 00:19:26.441 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:26.441 "is_configured": true, 00:19:26.441 "data_offset": 0, 00:19:26.441 "data_size": 65536 00:19:26.441 }, 00:19:26.441 { 00:19:26.441 "name": "BaseBdev2", 00:19:26.441 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:26.441 "is_configured": true, 00:19:26.441 "data_offset": 0, 00:19:26.441 "data_size": 65536 00:19:26.441 }, 00:19:26.441 { 00:19:26.441 "name": "BaseBdev3", 00:19:26.441 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:26.441 "is_configured": true, 00:19:26.441 "data_offset": 0, 00:19:26.441 "data_size": 65536 00:19:26.441 }, 00:19:26.441 { 00:19:26.441 "name": "BaseBdev4", 00:19:26.441 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:26.441 "is_configured": true, 00:19:26.441 "data_offset": 0, 00:19:26.441 "data_size": 65536 00:19:26.441 } 00:19:26.441 ] 00:19:26.441 }' 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.441 10:47:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:26.441 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.699 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.699 10:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:27.634 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:27.634 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.634 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.634 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.634 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.634 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.634 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.634 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.634 10:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.635 10:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.635 10:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.635 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.635 "name": "raid_bdev1", 00:19:27.635 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:27.635 "strip_size_kb": 64, 00:19:27.635 "state": "online", 00:19:27.635 "raid_level": "raid5f", 00:19:27.635 "superblock": false, 00:19:27.635 "num_base_bdevs": 4, 00:19:27.635 
"num_base_bdevs_discovered": 4, 00:19:27.635 "num_base_bdevs_operational": 4, 00:19:27.635 "process": { 00:19:27.635 "type": "rebuild", 00:19:27.635 "target": "spare", 00:19:27.635 "progress": { 00:19:27.635 "blocks": 174720, 00:19:27.635 "percent": 88 00:19:27.635 } 00:19:27.635 }, 00:19:27.635 "base_bdevs_list": [ 00:19:27.635 { 00:19:27.635 "name": "spare", 00:19:27.635 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:27.635 "is_configured": true, 00:19:27.635 "data_offset": 0, 00:19:27.635 "data_size": 65536 00:19:27.635 }, 00:19:27.635 { 00:19:27.635 "name": "BaseBdev2", 00:19:27.635 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:27.635 "is_configured": true, 00:19:27.635 "data_offset": 0, 00:19:27.635 "data_size": 65536 00:19:27.635 }, 00:19:27.635 { 00:19:27.635 "name": "BaseBdev3", 00:19:27.635 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:27.635 "is_configured": true, 00:19:27.635 "data_offset": 0, 00:19:27.635 "data_size": 65536 00:19:27.635 }, 00:19:27.635 { 00:19:27.635 "name": "BaseBdev4", 00:19:27.635 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:27.635 "is_configured": true, 00:19:27.635 "data_offset": 0, 00:19:27.635 "data_size": 65536 00:19:27.635 } 00:19:27.635 ] 00:19:27.635 }' 00:19:27.635 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.635 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.635 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.635 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.635 10:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:29.010 [2024-11-15 10:47:59.151589] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:29.010 [2024-11-15 10:47:59.151732] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:29.010 [2024-11-15 10:47:59.151824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.010 "name": "raid_bdev1", 00:19:29.010 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:29.010 "strip_size_kb": 64, 00:19:29.010 "state": "online", 00:19:29.010 "raid_level": "raid5f", 00:19:29.010 "superblock": false, 00:19:29.010 "num_base_bdevs": 4, 00:19:29.010 "num_base_bdevs_discovered": 4, 00:19:29.010 "num_base_bdevs_operational": 4, 00:19:29.010 "base_bdevs_list": [ 00:19:29.010 { 00:19:29.010 "name": "spare", 00:19:29.010 "uuid": 
"c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:29.010 "is_configured": true, 00:19:29.010 "data_offset": 0, 00:19:29.010 "data_size": 65536 00:19:29.010 }, 00:19:29.010 { 00:19:29.010 "name": "BaseBdev2", 00:19:29.010 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:29.010 "is_configured": true, 00:19:29.010 "data_offset": 0, 00:19:29.010 "data_size": 65536 00:19:29.010 }, 00:19:29.010 { 00:19:29.010 "name": "BaseBdev3", 00:19:29.010 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:29.010 "is_configured": true, 00:19:29.010 "data_offset": 0, 00:19:29.010 "data_size": 65536 00:19:29.010 }, 00:19:29.010 { 00:19:29.010 "name": "BaseBdev4", 00:19:29.010 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:29.010 "is_configured": true, 00:19:29.010 "data_offset": 0, 00:19:29.010 "data_size": 65536 00:19:29.010 } 00:19:29.010 ] 00:19:29.010 }' 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.010 "name": "raid_bdev1", 00:19:29.010 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:29.010 "strip_size_kb": 64, 00:19:29.010 "state": "online", 00:19:29.010 "raid_level": "raid5f", 00:19:29.010 "superblock": false, 00:19:29.010 "num_base_bdevs": 4, 00:19:29.010 "num_base_bdevs_discovered": 4, 00:19:29.010 "num_base_bdevs_operational": 4, 00:19:29.010 "base_bdevs_list": [ 00:19:29.010 { 00:19:29.010 "name": "spare", 00:19:29.010 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:29.010 "is_configured": true, 00:19:29.010 "data_offset": 0, 00:19:29.010 "data_size": 65536 00:19:29.010 }, 00:19:29.010 { 00:19:29.010 "name": "BaseBdev2", 00:19:29.010 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:29.010 "is_configured": true, 00:19:29.010 "data_offset": 0, 00:19:29.010 "data_size": 65536 00:19:29.010 }, 00:19:29.010 { 00:19:29.010 "name": "BaseBdev3", 00:19:29.010 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:29.010 "is_configured": true, 00:19:29.010 "data_offset": 0, 00:19:29.010 "data_size": 65536 00:19:29.010 }, 00:19:29.010 { 00:19:29.010 "name": "BaseBdev4", 00:19:29.010 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:29.010 "is_configured": true, 00:19:29.010 "data_offset": 0, 00:19:29.010 "data_size": 65536 00:19:29.010 } 00:19:29.010 ] 00:19:29.010 }' 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.010 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.011 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.011 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.011 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.011 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.011 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.011 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:29.269 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.269 "name": "raid_bdev1", 00:19:29.269 "uuid": "743e41e5-b727-47b5-bc1b-7df6c57d97b3", 00:19:29.269 "strip_size_kb": 64, 00:19:29.269 "state": "online", 00:19:29.269 "raid_level": "raid5f", 00:19:29.269 "superblock": false, 00:19:29.269 "num_base_bdevs": 4, 00:19:29.269 "num_base_bdevs_discovered": 4, 00:19:29.269 "num_base_bdevs_operational": 4, 00:19:29.269 "base_bdevs_list": [ 00:19:29.269 { 00:19:29.269 "name": "spare", 00:19:29.269 "uuid": "c3d94493-179a-5c50-ae37-195c0ef8f5d8", 00:19:29.269 "is_configured": true, 00:19:29.269 "data_offset": 0, 00:19:29.269 "data_size": 65536 00:19:29.269 }, 00:19:29.269 { 00:19:29.269 "name": "BaseBdev2", 00:19:29.269 "uuid": "ea50f4b9-c626-5e42-84b7-7931c452fcb0", 00:19:29.269 "is_configured": true, 00:19:29.269 "data_offset": 0, 00:19:29.269 "data_size": 65536 00:19:29.269 }, 00:19:29.269 { 00:19:29.269 "name": "BaseBdev3", 00:19:29.269 "uuid": "c4441368-7d48-555a-b097-e75dda9b0efc", 00:19:29.269 "is_configured": true, 00:19:29.269 "data_offset": 0, 00:19:29.269 "data_size": 65536 00:19:29.269 }, 00:19:29.269 { 00:19:29.269 "name": "BaseBdev4", 00:19:29.269 "uuid": "a44c0c33-cb74-5e45-a705-3559dd556c49", 00:19:29.269 "is_configured": true, 00:19:29.269 "data_offset": 0, 00:19:29.269 "data_size": 65536 00:19:29.269 } 00:19:29.269 ] 00:19:29.269 }' 00:19:29.269 10:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.269 10:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.836 [2024-11-15 10:48:00.095329] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.836 [2024-11-15 10:48:00.095582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.836 [2024-11-15 10:48:00.095731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.836 [2024-11-15 10:48:00.095861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.836 [2024-11-15 10:48:00.095880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:29.836 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:30.094 /dev/nbd0 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.094 1+0 records in 
00:19:30.094 1+0 records out 00:19:30.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356844 s, 11.5 MB/s 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:30.094 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:30.353 /dev/nbd1 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.353 1+0 records in 00:19:30.353 1+0 records out 00:19:30.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383306 s, 10.7 MB/s 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:30.353 10:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:30.611 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:30.611 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:30.611 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:30.611 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:30.611 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:30.611 10:48:01 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.611 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.869 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85166 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 85166 ']' 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 85166 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85166 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85166' 00:19:31.128 killing process with pid 85166 00:19:31.128 Received shutdown signal, test time was about 60.000000 seconds 00:19:31.128 00:19:31.128 Latency(us) 00:19:31.128 [2024-11-15T10:48:01.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.128 [2024-11-15T10:48:01.688Z] =================================================================================================================== 00:19:31.128 [2024-11-15T10:48:01.688Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 85166 00:19:31.128 10:48:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 85166 00:19:31.128 [2024-11-15 10:48:01.673728] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.694 [2024-11-15 10:48:02.098506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:33.077 00:19:33.077 real 0m20.681s 00:19:33.077 user 0m25.971s 00:19:33.077 sys 0m2.288s 00:19:33.077 ************************************ 00:19:33.077 END TEST raid5f_rebuild_test 00:19:33.077 ************************************ 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.077 10:48:03 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:19:33.077 10:48:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:33.077 10:48:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:33.077 10:48:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.077 ************************************ 00:19:33.077 START TEST raid5f_rebuild_test_sb 00:19:33.077 ************************************ 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 
)) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:19:33.077 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85681 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85681 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85681 ']' 00:19:33.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.078 10:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.078 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:33.078 Zero copy mechanism will not be used. 00:19:33.078 [2024-11-15 10:48:03.403316] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:19:33.078 [2024-11-15 10:48:03.403562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85681 ] 00:19:33.078 [2024-11-15 10:48:03.600477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.336 [2024-11-15 10:48:03.728944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.594 [2024-11-15 10:48:03.913514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.594 [2024-11-15 10:48:03.913572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.160 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.160 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:34.160 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:34.160 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:34.160 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.160 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.160 BaseBdev1_malloc 00:19:34.160 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.161 [2024-11-15 10:48:04.574483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:34.161 [2024-11-15 10:48:04.574866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.161 [2024-11-15 10:48:04.574933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:34.161 [2024-11-15 10:48:04.574988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.161 [2024-11-15 10:48:04.578617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.161 [2024-11-15 10:48:04.578692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:34.161 BaseBdev1 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.161 BaseBdev2_malloc 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:34.161 
10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.161 [2024-11-15 10:48:04.636237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:34.161 [2024-11-15 10:48:04.636472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.161 [2024-11-15 10:48:04.636681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:34.161 [2024-11-15 10:48:04.636813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.161 [2024-11-15 10:48:04.640173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.161 [2024-11-15 10:48:04.640434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:34.161 BaseBdev2 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.161 BaseBdev3_malloc 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.161 [2024-11-15 10:48:04.697679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:34.161 [2024-11-15 10:48:04.697988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.161 [2024-11-15 10:48:04.698073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:34.161 [2024-11-15 10:48:04.698200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.161 [2024-11-15 10:48:04.701113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.161 [2024-11-15 10:48:04.701172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:34.161 BaseBdev3 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.161 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.419 BaseBdev4_malloc 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.419 [2024-11-15 10:48:04.746755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:34.419 
[2024-11-15 10:48:04.747080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.419 [2024-11-15 10:48:04.747239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:34.419 [2024-11-15 10:48:04.747430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.419 [2024-11-15 10:48:04.750316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.419 [2024-11-15 10:48:04.750560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:34.419 BaseBdev4 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.419 spare_malloc 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.419 spare_delay 00:19:34.419 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.420 10:48:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.420 [2024-11-15 10:48:04.811870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:34.420 [2024-11-15 10:48:04.811965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.420 [2024-11-15 10:48:04.812001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:34.420 [2024-11-15 10:48:04.812019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.420 [2024-11-15 10:48:04.814815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.420 [2024-11-15 10:48:04.814872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:34.420 spare 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.420 [2024-11-15 10:48:04.819973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:34.420 [2024-11-15 10:48:04.822522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:34.420 [2024-11-15 10:48:04.822622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:34.420 [2024-11-15 10:48:04.822709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:34.420 [2024-11-15 10:48:04.823012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:34.420 [2024-11-15 
10:48:04.823038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:34.420 [2024-11-15 10:48:04.823431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:34.420 [2024-11-15 10:48:04.830278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:34.420 [2024-11-15 10:48:04.830531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:34.420 [2024-11-15 10:48:04.830895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.420 
10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.420 "name": "raid_bdev1", 00:19:34.420 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:34.420 "strip_size_kb": 64, 00:19:34.420 "state": "online", 00:19:34.420 "raid_level": "raid5f", 00:19:34.420 "superblock": true, 00:19:34.420 "num_base_bdevs": 4, 00:19:34.420 "num_base_bdevs_discovered": 4, 00:19:34.420 "num_base_bdevs_operational": 4, 00:19:34.420 "base_bdevs_list": [ 00:19:34.420 { 00:19:34.420 "name": "BaseBdev1", 00:19:34.420 "uuid": "f6db07ae-9a40-5734-bca7-9012a3b191a2", 00:19:34.420 "is_configured": true, 00:19:34.420 "data_offset": 2048, 00:19:34.420 "data_size": 63488 00:19:34.420 }, 00:19:34.420 { 00:19:34.420 "name": "BaseBdev2", 00:19:34.420 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:34.420 "is_configured": true, 00:19:34.420 "data_offset": 2048, 00:19:34.420 "data_size": 63488 00:19:34.420 }, 00:19:34.420 { 00:19:34.420 "name": "BaseBdev3", 00:19:34.420 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:34.420 "is_configured": true, 00:19:34.420 "data_offset": 2048, 00:19:34.420 "data_size": 63488 00:19:34.420 }, 00:19:34.420 { 00:19:34.420 "name": "BaseBdev4", 00:19:34.420 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:34.420 "is_configured": true, 00:19:34.420 "data_offset": 2048, 00:19:34.420 "data_size": 63488 00:19:34.420 } 00:19:34.420 ] 00:19:34.420 }' 00:19:34.420 10:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.420 10:48:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:34.986 [2024-11-15 10:48:05.406520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:34.986 10:48:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:34.986 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:35.552 [2024-11-15 10:48:05.834432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:35.552 /dev/nbd0 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 
00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:35.552 1+0 records in 00:19:35.552 1+0 records out 00:19:35.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406694 s, 10.1 MB/s 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:35.552 10:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:19:36.488 496+0 records in 00:19:36.488 496+0 records out 00:19:36.488 97517568 bytes (98 MB, 93 MiB) copied, 0.776327 s, 126 MB/s 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:36.488 [2024-11-15 10:48:06.968883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.488 [2024-11-15 10:48:06.981844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.488 10:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.488 10:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.746 10:48:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.746 "name": "raid_bdev1", 00:19:36.746 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:36.746 "strip_size_kb": 64, 00:19:36.746 "state": "online", 00:19:36.746 "raid_level": "raid5f", 00:19:36.746 "superblock": true, 00:19:36.746 "num_base_bdevs": 4, 00:19:36.746 "num_base_bdevs_discovered": 3, 00:19:36.746 "num_base_bdevs_operational": 3, 00:19:36.746 "base_bdevs_list": [ 00:19:36.746 { 00:19:36.746 "name": null, 00:19:36.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.746 "is_configured": false, 00:19:36.746 "data_offset": 0, 00:19:36.746 "data_size": 63488 00:19:36.746 }, 00:19:36.746 { 00:19:36.746 "name": "BaseBdev2", 00:19:36.746 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:36.746 "is_configured": true, 00:19:36.746 "data_offset": 2048, 00:19:36.746 "data_size": 63488 00:19:36.746 }, 00:19:36.746 { 00:19:36.746 "name": "BaseBdev3", 00:19:36.746 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:36.746 "is_configured": true, 00:19:36.746 "data_offset": 2048, 00:19:36.746 "data_size": 63488 00:19:36.746 }, 00:19:36.746 { 00:19:36.746 "name": "BaseBdev4", 00:19:36.746 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:36.746 "is_configured": true, 00:19:36.746 "data_offset": 2048, 00:19:36.746 "data_size": 63488 00:19:36.747 } 00:19:36.747 ] 00:19:36.747 }' 00:19:36.747 10:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.747 10:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.005 10:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:37.005 10:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.005 10:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.005 [2024-11-15 10:48:07.526037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:19:37.005 [2024-11-15 10:48:07.540134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:37.005 10:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.005 10:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:37.005 [2024-11-15 10:48:07.549054] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.412 "name": "raid_bdev1", 00:19:38.412 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:38.412 "strip_size_kb": 64, 00:19:38.412 "state": "online", 00:19:38.412 "raid_level": "raid5f", 00:19:38.412 "superblock": true, 00:19:38.412 "num_base_bdevs": 4, 
00:19:38.412 "num_base_bdevs_discovered": 4, 00:19:38.412 "num_base_bdevs_operational": 4, 00:19:38.412 "process": { 00:19:38.412 "type": "rebuild", 00:19:38.412 "target": "spare", 00:19:38.412 "progress": { 00:19:38.412 "blocks": 17280, 00:19:38.412 "percent": 9 00:19:38.412 } 00:19:38.412 }, 00:19:38.412 "base_bdevs_list": [ 00:19:38.412 { 00:19:38.412 "name": "spare", 00:19:38.412 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:38.412 "is_configured": true, 00:19:38.412 "data_offset": 2048, 00:19:38.412 "data_size": 63488 00:19:38.412 }, 00:19:38.412 { 00:19:38.412 "name": "BaseBdev2", 00:19:38.412 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:38.412 "is_configured": true, 00:19:38.412 "data_offset": 2048, 00:19:38.412 "data_size": 63488 00:19:38.412 }, 00:19:38.412 { 00:19:38.412 "name": "BaseBdev3", 00:19:38.412 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:38.412 "is_configured": true, 00:19:38.412 "data_offset": 2048, 00:19:38.412 "data_size": 63488 00:19:38.412 }, 00:19:38.412 { 00:19:38.412 "name": "BaseBdev4", 00:19:38.412 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:38.412 "is_configured": true, 00:19:38.412 "data_offset": 2048, 00:19:38.412 "data_size": 63488 00:19:38.412 } 00:19:38.412 ] 00:19:38.412 }' 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.412 10:48:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.412 [2024-11-15 10:48:08.714604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.412 [2024-11-15 10:48:08.761587] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:38.412 [2024-11-15 10:48:08.761716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.412 [2024-11-15 10:48:08.761744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.412 [2024-11-15 10:48:08.761760] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.412 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.412 "name": "raid_bdev1", 00:19:38.412 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:38.412 "strip_size_kb": 64, 00:19:38.412 "state": "online", 00:19:38.412 "raid_level": "raid5f", 00:19:38.412 "superblock": true, 00:19:38.412 "num_base_bdevs": 4, 00:19:38.412 "num_base_bdevs_discovered": 3, 00:19:38.412 "num_base_bdevs_operational": 3, 00:19:38.412 "base_bdevs_list": [ 00:19:38.412 { 00:19:38.412 "name": null, 00:19:38.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.412 "is_configured": false, 00:19:38.412 "data_offset": 0, 00:19:38.412 "data_size": 63488 00:19:38.412 }, 00:19:38.412 { 00:19:38.412 "name": "BaseBdev2", 00:19:38.412 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:38.412 "is_configured": true, 00:19:38.412 "data_offset": 2048, 00:19:38.412 "data_size": 63488 00:19:38.412 }, 00:19:38.412 { 00:19:38.412 "name": "BaseBdev3", 00:19:38.412 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:38.412 "is_configured": true, 00:19:38.412 "data_offset": 2048, 00:19:38.412 "data_size": 63488 00:19:38.412 }, 00:19:38.412 { 00:19:38.412 "name": "BaseBdev4", 00:19:38.412 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:38.412 "is_configured": true, 00:19:38.412 "data_offset": 2048, 00:19:38.413 "data_size": 63488 00:19:38.413 } 00:19:38.413 ] 00:19:38.413 }' 00:19:38.413 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.413 10:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.979 "name": "raid_bdev1", 00:19:38.979 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:38.979 "strip_size_kb": 64, 00:19:38.979 "state": "online", 00:19:38.979 "raid_level": "raid5f", 00:19:38.979 "superblock": true, 00:19:38.979 "num_base_bdevs": 4, 00:19:38.979 "num_base_bdevs_discovered": 3, 00:19:38.979 "num_base_bdevs_operational": 3, 00:19:38.979 "base_bdevs_list": [ 00:19:38.979 { 00:19:38.979 "name": null, 00:19:38.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.979 "is_configured": false, 00:19:38.979 "data_offset": 0, 00:19:38.979 "data_size": 63488 00:19:38.979 }, 00:19:38.979 { 
00:19:38.979 "name": "BaseBdev2", 00:19:38.979 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:38.979 "is_configured": true, 00:19:38.979 "data_offset": 2048, 00:19:38.979 "data_size": 63488 00:19:38.979 }, 00:19:38.979 { 00:19:38.979 "name": "BaseBdev3", 00:19:38.979 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:38.979 "is_configured": true, 00:19:38.979 "data_offset": 2048, 00:19:38.979 "data_size": 63488 00:19:38.979 }, 00:19:38.979 { 00:19:38.979 "name": "BaseBdev4", 00:19:38.979 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:38.979 "is_configured": true, 00:19:38.979 "data_offset": 2048, 00:19:38.979 "data_size": 63488 00:19:38.979 } 00:19:38.979 ] 00:19:38.979 }' 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.979 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.237 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.237 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:39.237 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.237 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.237 [2024-11-15 10:48:09.551456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:39.237 [2024-11-15 10:48:09.564580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:39.237 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.237 10:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:39.237 [2024-11-15 10:48:09.573373] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.172 "name": "raid_bdev1", 00:19:40.172 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:40.172 "strip_size_kb": 64, 00:19:40.172 "state": "online", 00:19:40.172 "raid_level": "raid5f", 00:19:40.172 "superblock": true, 00:19:40.172 "num_base_bdevs": 4, 00:19:40.172 "num_base_bdevs_discovered": 4, 00:19:40.172 "num_base_bdevs_operational": 4, 00:19:40.172 "process": { 00:19:40.172 "type": "rebuild", 00:19:40.172 "target": "spare", 00:19:40.172 "progress": { 00:19:40.172 "blocks": 17280, 00:19:40.172 "percent": 9 00:19:40.172 } 00:19:40.172 }, 00:19:40.172 "base_bdevs_list": [ 00:19:40.172 { 00:19:40.172 "name": "spare", 00:19:40.172 "uuid": 
"a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:40.172 "is_configured": true, 00:19:40.172 "data_offset": 2048, 00:19:40.172 "data_size": 63488 00:19:40.172 }, 00:19:40.172 { 00:19:40.172 "name": "BaseBdev2", 00:19:40.172 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:40.172 "is_configured": true, 00:19:40.172 "data_offset": 2048, 00:19:40.172 "data_size": 63488 00:19:40.172 }, 00:19:40.172 { 00:19:40.172 "name": "BaseBdev3", 00:19:40.172 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:40.172 "is_configured": true, 00:19:40.172 "data_offset": 2048, 00:19:40.172 "data_size": 63488 00:19:40.172 }, 00:19:40.172 { 00:19:40.172 "name": "BaseBdev4", 00:19:40.172 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:40.172 "is_configured": true, 00:19:40.172 "data_offset": 2048, 00:19:40.172 "data_size": 63488 00:19:40.172 } 00:19:40.172 ] 00:19:40.172 }' 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:40.172 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:40.172 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=684 00:19:40.173 
10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.173 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.431 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.431 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.431 "name": "raid_bdev1", 00:19:40.431 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:40.431 "strip_size_kb": 64, 00:19:40.431 "state": "online", 00:19:40.431 "raid_level": "raid5f", 00:19:40.431 "superblock": true, 00:19:40.431 "num_base_bdevs": 4, 00:19:40.431 "num_base_bdevs_discovered": 4, 00:19:40.431 "num_base_bdevs_operational": 4, 00:19:40.431 "process": { 00:19:40.431 "type": "rebuild", 00:19:40.431 "target": "spare", 00:19:40.431 "progress": { 00:19:40.431 "blocks": 21120, 00:19:40.431 "percent": 11 00:19:40.431 } 00:19:40.431 }, 00:19:40.431 "base_bdevs_list": [ 00:19:40.431 { 00:19:40.431 "name": "spare", 00:19:40.431 "uuid": 
"a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:40.431 "is_configured": true, 00:19:40.431 "data_offset": 2048, 00:19:40.431 "data_size": 63488 00:19:40.431 }, 00:19:40.431 { 00:19:40.431 "name": "BaseBdev2", 00:19:40.431 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:40.431 "is_configured": true, 00:19:40.431 "data_offset": 2048, 00:19:40.431 "data_size": 63488 00:19:40.431 }, 00:19:40.431 { 00:19:40.431 "name": "BaseBdev3", 00:19:40.431 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:40.431 "is_configured": true, 00:19:40.431 "data_offset": 2048, 00:19:40.431 "data_size": 63488 00:19:40.431 }, 00:19:40.431 { 00:19:40.431 "name": "BaseBdev4", 00:19:40.431 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:40.431 "is_configured": true, 00:19:40.431 "data_offset": 2048, 00:19:40.431 "data_size": 63488 00:19:40.431 } 00:19:40.431 ] 00:19:40.431 }' 00:19:40.431 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.431 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.431 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.431 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.431 10:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.365 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.628 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.628 "name": "raid_bdev1", 00:19:41.628 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:41.628 "strip_size_kb": 64, 00:19:41.628 "state": "online", 00:19:41.628 "raid_level": "raid5f", 00:19:41.628 "superblock": true, 00:19:41.628 "num_base_bdevs": 4, 00:19:41.628 "num_base_bdevs_discovered": 4, 00:19:41.628 "num_base_bdevs_operational": 4, 00:19:41.628 "process": { 00:19:41.628 "type": "rebuild", 00:19:41.628 "target": "spare", 00:19:41.628 "progress": { 00:19:41.628 "blocks": 44160, 00:19:41.628 "percent": 23 00:19:41.628 } 00:19:41.628 }, 00:19:41.628 "base_bdevs_list": [ 00:19:41.628 { 00:19:41.628 "name": "spare", 00:19:41.628 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:41.628 "is_configured": true, 00:19:41.628 "data_offset": 2048, 00:19:41.628 "data_size": 63488 00:19:41.628 }, 00:19:41.628 { 00:19:41.628 "name": "BaseBdev2", 00:19:41.628 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:41.628 "is_configured": true, 00:19:41.628 "data_offset": 2048, 00:19:41.628 "data_size": 63488 00:19:41.628 }, 00:19:41.628 { 00:19:41.628 "name": "BaseBdev3", 00:19:41.628 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:41.628 "is_configured": true, 00:19:41.628 
"data_offset": 2048, 00:19:41.628 "data_size": 63488 00:19:41.628 }, 00:19:41.628 { 00:19:41.628 "name": "BaseBdev4", 00:19:41.628 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:41.628 "is_configured": true, 00:19:41.628 "data_offset": 2048, 00:19:41.628 "data_size": 63488 00:19:41.628 } 00:19:41.628 ] 00:19:41.628 }' 00:19:41.628 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.628 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:41.628 10:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.628 10:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.628 10:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.578 "name": "raid_bdev1", 00:19:42.578 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:42.578 "strip_size_kb": 64, 00:19:42.578 "state": "online", 00:19:42.578 "raid_level": "raid5f", 00:19:42.578 "superblock": true, 00:19:42.578 "num_base_bdevs": 4, 00:19:42.578 "num_base_bdevs_discovered": 4, 00:19:42.578 "num_base_bdevs_operational": 4, 00:19:42.578 "process": { 00:19:42.578 "type": "rebuild", 00:19:42.578 "target": "spare", 00:19:42.578 "progress": { 00:19:42.578 "blocks": 65280, 00:19:42.578 "percent": 34 00:19:42.578 } 00:19:42.578 }, 00:19:42.578 "base_bdevs_list": [ 00:19:42.578 { 00:19:42.578 "name": "spare", 00:19:42.578 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:42.578 "is_configured": true, 00:19:42.578 "data_offset": 2048, 00:19:42.578 "data_size": 63488 00:19:42.578 }, 00:19:42.578 { 00:19:42.578 "name": "BaseBdev2", 00:19:42.578 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:42.578 "is_configured": true, 00:19:42.578 "data_offset": 2048, 00:19:42.578 "data_size": 63488 00:19:42.578 }, 00:19:42.578 { 00:19:42.578 "name": "BaseBdev3", 00:19:42.578 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:42.578 "is_configured": true, 00:19:42.578 "data_offset": 2048, 00:19:42.578 "data_size": 63488 00:19:42.578 }, 00:19:42.578 { 00:19:42.578 "name": "BaseBdev4", 00:19:42.578 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:42.578 "is_configured": true, 00:19:42.578 "data_offset": 2048, 00:19:42.578 "data_size": 63488 00:19:42.578 } 00:19:42.578 ] 00:19:42.578 }' 00:19:42.578 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.836 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:19:42.836 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.836 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.836 10:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.772 "name": "raid_bdev1", 00:19:43.772 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:43.772 "strip_size_kb": 64, 00:19:43.772 "state": "online", 00:19:43.772 "raid_level": "raid5f", 00:19:43.772 "superblock": true, 00:19:43.772 "num_base_bdevs": 4, 00:19:43.772 "num_base_bdevs_discovered": 4, 
00:19:43.772 "num_base_bdevs_operational": 4, 00:19:43.772 "process": { 00:19:43.772 "type": "rebuild", 00:19:43.772 "target": "spare", 00:19:43.772 "progress": { 00:19:43.772 "blocks": 88320, 00:19:43.772 "percent": 46 00:19:43.772 } 00:19:43.772 }, 00:19:43.772 "base_bdevs_list": [ 00:19:43.772 { 00:19:43.772 "name": "spare", 00:19:43.772 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:43.772 "is_configured": true, 00:19:43.772 "data_offset": 2048, 00:19:43.772 "data_size": 63488 00:19:43.772 }, 00:19:43.772 { 00:19:43.772 "name": "BaseBdev2", 00:19:43.772 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:43.772 "is_configured": true, 00:19:43.772 "data_offset": 2048, 00:19:43.772 "data_size": 63488 00:19:43.772 }, 00:19:43.772 { 00:19:43.772 "name": "BaseBdev3", 00:19:43.772 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:43.772 "is_configured": true, 00:19:43.772 "data_offset": 2048, 00:19:43.772 "data_size": 63488 00:19:43.772 }, 00:19:43.772 { 00:19:43.772 "name": "BaseBdev4", 00:19:43.772 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:43.772 "is_configured": true, 00:19:43.772 "data_offset": 2048, 00:19:43.772 "data_size": 63488 00:19:43.772 } 00:19:43.772 ] 00:19:43.772 }' 00:19:43.772 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.030 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.030 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.030 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.030 10:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:44.963 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.964 "name": "raid_bdev1", 00:19:44.964 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:44.964 "strip_size_kb": 64, 00:19:44.964 "state": "online", 00:19:44.964 "raid_level": "raid5f", 00:19:44.964 "superblock": true, 00:19:44.964 "num_base_bdevs": 4, 00:19:44.964 "num_base_bdevs_discovered": 4, 00:19:44.964 "num_base_bdevs_operational": 4, 00:19:44.964 "process": { 00:19:44.964 "type": "rebuild", 00:19:44.964 "target": "spare", 00:19:44.964 "progress": { 00:19:44.964 "blocks": 109440, 00:19:44.964 "percent": 57 00:19:44.964 } 00:19:44.964 }, 00:19:44.964 "base_bdevs_list": [ 00:19:44.964 { 00:19:44.964 "name": "spare", 00:19:44.964 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:44.964 "is_configured": true, 00:19:44.964 "data_offset": 2048, 00:19:44.964 "data_size": 63488 00:19:44.964 }, 00:19:44.964 { 00:19:44.964 "name": "BaseBdev2", 
00:19:44.964 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:44.964 "is_configured": true, 00:19:44.964 "data_offset": 2048, 00:19:44.964 "data_size": 63488 00:19:44.964 }, 00:19:44.964 { 00:19:44.964 "name": "BaseBdev3", 00:19:44.964 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:44.964 "is_configured": true, 00:19:44.964 "data_offset": 2048, 00:19:44.964 "data_size": 63488 00:19:44.964 }, 00:19:44.964 { 00:19:44.964 "name": "BaseBdev4", 00:19:44.964 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:44.964 "is_configured": true, 00:19:44.964 "data_offset": 2048, 00:19:44.964 "data_size": 63488 00:19:44.964 } 00:19:44.964 ] 00:19:44.964 }' 00:19:44.964 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.222 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.222 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.222 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.222 10:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.157 "name": "raid_bdev1", 00:19:46.157 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:46.157 "strip_size_kb": 64, 00:19:46.157 "state": "online", 00:19:46.157 "raid_level": "raid5f", 00:19:46.157 "superblock": true, 00:19:46.157 "num_base_bdevs": 4, 00:19:46.157 "num_base_bdevs_discovered": 4, 00:19:46.157 "num_base_bdevs_operational": 4, 00:19:46.157 "process": { 00:19:46.157 "type": "rebuild", 00:19:46.157 "target": "spare", 00:19:46.157 "progress": { 00:19:46.157 "blocks": 132480, 00:19:46.157 "percent": 69 00:19:46.157 } 00:19:46.157 }, 00:19:46.157 "base_bdevs_list": [ 00:19:46.157 { 00:19:46.157 "name": "spare", 00:19:46.157 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:46.157 "is_configured": true, 00:19:46.157 "data_offset": 2048, 00:19:46.157 "data_size": 63488 00:19:46.157 }, 00:19:46.157 { 00:19:46.157 "name": "BaseBdev2", 00:19:46.157 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:46.157 "is_configured": true, 00:19:46.157 "data_offset": 2048, 00:19:46.157 "data_size": 63488 00:19:46.157 }, 00:19:46.157 { 00:19:46.157 "name": "BaseBdev3", 00:19:46.157 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:46.157 "is_configured": true, 00:19:46.157 "data_offset": 2048, 00:19:46.157 "data_size": 63488 00:19:46.157 }, 00:19:46.157 { 00:19:46.157 "name": "BaseBdev4", 00:19:46.157 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:46.157 "is_configured": true, 
00:19:46.157 "data_offset": 2048, 00:19:46.157 "data_size": 63488 00:19:46.157 } 00:19:46.157 ] 00:19:46.157 }' 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.157 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.415 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.415 10:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.348 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:47.348 "name": "raid_bdev1", 00:19:47.348 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:47.348 "strip_size_kb": 64, 00:19:47.348 "state": "online", 00:19:47.348 "raid_level": "raid5f", 00:19:47.348 "superblock": true, 00:19:47.348 "num_base_bdevs": 4, 00:19:47.348 "num_base_bdevs_discovered": 4, 00:19:47.348 "num_base_bdevs_operational": 4, 00:19:47.348 "process": { 00:19:47.348 "type": "rebuild", 00:19:47.348 "target": "spare", 00:19:47.348 "progress": { 00:19:47.348 "blocks": 155520, 00:19:47.348 "percent": 81 00:19:47.348 } 00:19:47.348 }, 00:19:47.349 "base_bdevs_list": [ 00:19:47.349 { 00:19:47.349 "name": "spare", 00:19:47.349 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:47.349 "is_configured": true, 00:19:47.349 "data_offset": 2048, 00:19:47.349 "data_size": 63488 00:19:47.349 }, 00:19:47.349 { 00:19:47.349 "name": "BaseBdev2", 00:19:47.349 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:47.349 "is_configured": true, 00:19:47.349 "data_offset": 2048, 00:19:47.349 "data_size": 63488 00:19:47.349 }, 00:19:47.349 { 00:19:47.349 "name": "BaseBdev3", 00:19:47.349 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:47.349 "is_configured": true, 00:19:47.349 "data_offset": 2048, 00:19:47.349 "data_size": 63488 00:19:47.349 }, 00:19:47.349 { 00:19:47.349 "name": "BaseBdev4", 00:19:47.349 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:47.349 "is_configured": true, 00:19:47.349 "data_offset": 2048, 00:19:47.349 "data_size": 63488 00:19:47.349 } 00:19:47.349 ] 00:19:47.349 }' 00:19:47.349 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.349 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.349 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.607 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:19:47.607 10:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.540 "name": "raid_bdev1", 00:19:48.540 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:48.540 "strip_size_kb": 64, 00:19:48.540 "state": "online", 00:19:48.540 "raid_level": "raid5f", 00:19:48.540 "superblock": true, 00:19:48.540 "num_base_bdevs": 4, 00:19:48.540 "num_base_bdevs_discovered": 4, 00:19:48.540 "num_base_bdevs_operational": 4, 00:19:48.540 "process": { 00:19:48.540 "type": "rebuild", 00:19:48.540 "target": "spare", 00:19:48.540 "progress": { 00:19:48.540 "blocks": 176640, 00:19:48.540 "percent": 92 00:19:48.540 
} 00:19:48.540 }, 00:19:48.540 "base_bdevs_list": [ 00:19:48.540 { 00:19:48.540 "name": "spare", 00:19:48.540 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:48.540 "is_configured": true, 00:19:48.540 "data_offset": 2048, 00:19:48.540 "data_size": 63488 00:19:48.540 }, 00:19:48.540 { 00:19:48.540 "name": "BaseBdev2", 00:19:48.540 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:48.540 "is_configured": true, 00:19:48.540 "data_offset": 2048, 00:19:48.540 "data_size": 63488 00:19:48.540 }, 00:19:48.540 { 00:19:48.540 "name": "BaseBdev3", 00:19:48.540 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:48.540 "is_configured": true, 00:19:48.540 "data_offset": 2048, 00:19:48.540 "data_size": 63488 00:19:48.540 }, 00:19:48.540 { 00:19:48.540 "name": "BaseBdev4", 00:19:48.540 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:48.540 "is_configured": true, 00:19:48.540 "data_offset": 2048, 00:19:48.540 "data_size": 63488 00:19:48.540 } 00:19:48.540 ] 00:19:48.540 }' 00:19:48.540 10:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.540 10:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.540 10:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.799 10:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.799 10:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:49.365 [2024-11-15 10:48:19.681562] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:49.365 [2024-11-15 10:48:19.681757] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:49.365 [2024-11-15 10:48:19.682069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.623 "name": "raid_bdev1", 00:19:49.623 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:49.623 "strip_size_kb": 64, 00:19:49.623 "state": "online", 00:19:49.623 "raid_level": "raid5f", 00:19:49.623 "superblock": true, 00:19:49.623 "num_base_bdevs": 4, 00:19:49.623 "num_base_bdevs_discovered": 4, 00:19:49.623 "num_base_bdevs_operational": 4, 00:19:49.623 "base_bdevs_list": [ 00:19:49.623 { 00:19:49.623 "name": "spare", 00:19:49.623 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:49.623 "is_configured": true, 00:19:49.623 "data_offset": 2048, 00:19:49.623 "data_size": 63488 00:19:49.623 }, 00:19:49.623 { 00:19:49.623 "name": "BaseBdev2", 00:19:49.623 "uuid": 
"f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:49.623 "is_configured": true, 00:19:49.623 "data_offset": 2048, 00:19:49.623 "data_size": 63488 00:19:49.623 }, 00:19:49.623 { 00:19:49.623 "name": "BaseBdev3", 00:19:49.623 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:49.623 "is_configured": true, 00:19:49.623 "data_offset": 2048, 00:19:49.623 "data_size": 63488 00:19:49.623 }, 00:19:49.623 { 00:19:49.623 "name": "BaseBdev4", 00:19:49.623 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:49.623 "is_configured": true, 00:19:49.623 "data_offset": 2048, 00:19:49.623 "data_size": 63488 00:19:49.623 } 00:19:49.623 ] 00:19:49.623 }' 00:19:49.623 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.882 "name": "raid_bdev1", 00:19:49.882 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:49.882 "strip_size_kb": 64, 00:19:49.882 "state": "online", 00:19:49.882 "raid_level": "raid5f", 00:19:49.882 "superblock": true, 00:19:49.882 "num_base_bdevs": 4, 00:19:49.882 "num_base_bdevs_discovered": 4, 00:19:49.882 "num_base_bdevs_operational": 4, 00:19:49.882 "base_bdevs_list": [ 00:19:49.882 { 00:19:49.882 "name": "spare", 00:19:49.882 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:49.882 "is_configured": true, 00:19:49.882 "data_offset": 2048, 00:19:49.882 "data_size": 63488 00:19:49.882 }, 00:19:49.882 { 00:19:49.882 "name": "BaseBdev2", 00:19:49.882 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:49.882 "is_configured": true, 00:19:49.882 "data_offset": 2048, 00:19:49.882 "data_size": 63488 00:19:49.882 }, 00:19:49.882 { 00:19:49.882 "name": "BaseBdev3", 00:19:49.882 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:49.882 "is_configured": true, 00:19:49.882 "data_offset": 2048, 00:19:49.882 "data_size": 63488 00:19:49.882 }, 00:19:49.882 { 00:19:49.882 "name": "BaseBdev4", 00:19:49.882 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:49.882 "is_configured": true, 00:19:49.882 "data_offset": 2048, 00:19:49.882 "data_size": 63488 00:19:49.882 } 00:19:49.882 ] 00:19:49.882 }' 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.882 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.882 
10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:50.141 "name": "raid_bdev1", 00:19:50.141 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:50.141 "strip_size_kb": 64, 00:19:50.141 "state": "online", 00:19:50.141 "raid_level": "raid5f", 00:19:50.141 "superblock": true, 00:19:50.141 "num_base_bdevs": 4, 00:19:50.141 "num_base_bdevs_discovered": 4, 00:19:50.141 "num_base_bdevs_operational": 4, 00:19:50.141 "base_bdevs_list": [ 00:19:50.141 { 00:19:50.141 "name": "spare", 00:19:50.141 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:50.141 "is_configured": true, 00:19:50.141 "data_offset": 2048, 00:19:50.141 "data_size": 63488 00:19:50.141 }, 00:19:50.141 { 00:19:50.141 "name": "BaseBdev2", 00:19:50.141 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:50.141 "is_configured": true, 00:19:50.141 "data_offset": 2048, 00:19:50.141 "data_size": 63488 00:19:50.141 }, 00:19:50.141 { 00:19:50.141 "name": "BaseBdev3", 00:19:50.141 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:50.141 "is_configured": true, 00:19:50.141 "data_offset": 2048, 00:19:50.141 "data_size": 63488 00:19:50.141 }, 00:19:50.141 { 00:19:50.141 "name": "BaseBdev4", 00:19:50.141 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:50.141 "is_configured": true, 00:19:50.141 "data_offset": 2048, 00:19:50.141 "data_size": 63488 00:19:50.141 } 00:19:50.141 ] 00:19:50.141 }' 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.141 10:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.708 [2024-11-15 10:48:21.018805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:19:50.708 [2024-11-15 10:48:21.018861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.708 [2024-11-15 10:48:21.019007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.708 [2024-11-15 10:48:21.019133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.708 [2024-11-15 10:48:21.019165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:50.708 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:50.967 /dev/nbd0 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:50.967 1+0 records in 
00:19:50.967 1+0 records out 00:19:50.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381337 s, 10.7 MB/s 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:50.967 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.968 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:50.968 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:50.968 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:50.968 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:50.968 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:51.534 /dev/nbd1 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:51.534 10:48:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.534 1+0 records in 00:19:51.534 1+0 records out 00:19:51.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551475 s, 7.4 MB/s 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:51.534 10:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:51.534 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:51.534 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:51.534 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:51.534 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.534 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:19:51.534 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.534 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:52.101 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.360 [2024-11-15 10:48:22.676172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:52.360 [2024-11-15 10:48:22.676245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.360 [2024-11-15 10:48:22.676280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:52.360 [2024-11-15 10:48:22.676295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.360 [2024-11-15 10:48:22.679160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.360 [2024-11-15 10:48:22.679205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:52.360 [2024-11-15 10:48:22.679369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:52.360 [2024-11-15 10:48:22.679463] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.360 [2024-11-15 10:48:22.679652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.360 [2024-11-15 10:48:22.679788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:52.360 [2024-11-15 10:48:22.679909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:52.360 spare 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.360 [2024-11-15 10:48:22.780059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:52.360 [2024-11-15 10:48:22.780147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:52.360 [2024-11-15 10:48:22.780722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:19:52.360 [2024-11-15 10:48:22.787240] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:52.360 [2024-11-15 10:48:22.787283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:52.360 [2024-11-15 10:48:22.787720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.360 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.360 "name": "raid_bdev1", 00:19:52.360 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:52.360 "strip_size_kb": 64, 00:19:52.360 "state": "online", 00:19:52.360 "raid_level": "raid5f", 00:19:52.360 "superblock": true, 00:19:52.360 "num_base_bdevs": 4, 00:19:52.360 "num_base_bdevs_discovered": 4, 00:19:52.360 "num_base_bdevs_operational": 4, 00:19:52.360 "base_bdevs_list": [ 00:19:52.360 { 
00:19:52.360 "name": "spare", 00:19:52.360 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:52.360 "is_configured": true, 00:19:52.360 "data_offset": 2048, 00:19:52.360 "data_size": 63488 00:19:52.360 }, 00:19:52.360 { 00:19:52.360 "name": "BaseBdev2", 00:19:52.360 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:52.360 "is_configured": true, 00:19:52.360 "data_offset": 2048, 00:19:52.360 "data_size": 63488 00:19:52.360 }, 00:19:52.360 { 00:19:52.360 "name": "BaseBdev3", 00:19:52.360 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:52.360 "is_configured": true, 00:19:52.360 "data_offset": 2048, 00:19:52.360 "data_size": 63488 00:19:52.360 }, 00:19:52.360 { 00:19:52.360 "name": "BaseBdev4", 00:19:52.360 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:52.360 "is_configured": true, 00:19:52.360 "data_offset": 2048, 00:19:52.361 "data_size": 63488 00:19:52.361 } 00:19:52.361 ] 00:19:52.361 }' 00:19:52.361 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.361 10:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.927 "name": "raid_bdev1", 00:19:52.927 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:52.927 "strip_size_kb": 64, 00:19:52.927 "state": "online", 00:19:52.927 "raid_level": "raid5f", 00:19:52.927 "superblock": true, 00:19:52.927 "num_base_bdevs": 4, 00:19:52.927 "num_base_bdevs_discovered": 4, 00:19:52.927 "num_base_bdevs_operational": 4, 00:19:52.927 "base_bdevs_list": [ 00:19:52.927 { 00:19:52.927 "name": "spare", 00:19:52.927 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:52.927 "is_configured": true, 00:19:52.927 "data_offset": 2048, 00:19:52.927 "data_size": 63488 00:19:52.927 }, 00:19:52.927 { 00:19:52.927 "name": "BaseBdev2", 00:19:52.927 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:52.927 "is_configured": true, 00:19:52.927 "data_offset": 2048, 00:19:52.927 "data_size": 63488 00:19:52.927 }, 00:19:52.927 { 00:19:52.927 "name": "BaseBdev3", 00:19:52.927 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:52.927 "is_configured": true, 00:19:52.927 "data_offset": 2048, 00:19:52.927 "data_size": 63488 00:19:52.927 }, 00:19:52.927 { 00:19:52.927 "name": "BaseBdev4", 00:19:52.927 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:52.927 "is_configured": true, 00:19:52.927 "data_offset": 2048, 00:19:52.927 "data_size": 63488 00:19:52.927 } 00:19:52.927 ] 00:19:52.927 }' 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.927 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.185 [2024-11-15 10:48:23.542944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.185 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.185 "name": "raid_bdev1", 00:19:53.185 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:53.185 "strip_size_kb": 64, 00:19:53.185 "state": "online", 00:19:53.185 "raid_level": "raid5f", 00:19:53.185 "superblock": true, 00:19:53.185 "num_base_bdevs": 4, 00:19:53.185 "num_base_bdevs_discovered": 3, 00:19:53.185 "num_base_bdevs_operational": 3, 00:19:53.185 "base_bdevs_list": [ 00:19:53.185 { 00:19:53.185 "name": null, 00:19:53.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.185 "is_configured": false, 00:19:53.185 "data_offset": 0, 00:19:53.185 "data_size": 63488 00:19:53.185 }, 00:19:53.185 { 00:19:53.185 "name": "BaseBdev2", 00:19:53.185 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:53.185 "is_configured": true, 00:19:53.185 "data_offset": 2048, 00:19:53.185 "data_size": 63488 00:19:53.185 }, 00:19:53.185 
{ 00:19:53.185 "name": "BaseBdev3", 00:19:53.185 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:53.185 "is_configured": true, 00:19:53.185 "data_offset": 2048, 00:19:53.185 "data_size": 63488 00:19:53.185 }, 00:19:53.185 { 00:19:53.185 "name": "BaseBdev4", 00:19:53.186 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:53.186 "is_configured": true, 00:19:53.186 "data_offset": 2048, 00:19:53.186 "data_size": 63488 00:19:53.186 } 00:19:53.186 ] 00:19:53.186 }' 00:19:53.186 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.186 10:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.752 10:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:53.752 10:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.752 10:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.752 [2024-11-15 10:48:24.067106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.752 [2024-11-15 10:48:24.067365] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:53.752 [2024-11-15 10:48:24.067398] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:53.752 [2024-11-15 10:48:24.067441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.752 [2024-11-15 10:48:24.080130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:19:53.752 10:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.752 10:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:53.752 [2024-11-15 10:48:24.089218] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.692 "name": "raid_bdev1", 00:19:54.692 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:54.692 "strip_size_kb": 64, 00:19:54.692 "state": "online", 00:19:54.692 
"raid_level": "raid5f", 00:19:54.692 "superblock": true, 00:19:54.692 "num_base_bdevs": 4, 00:19:54.692 "num_base_bdevs_discovered": 4, 00:19:54.692 "num_base_bdevs_operational": 4, 00:19:54.692 "process": { 00:19:54.692 "type": "rebuild", 00:19:54.692 "target": "spare", 00:19:54.692 "progress": { 00:19:54.692 "blocks": 17280, 00:19:54.692 "percent": 9 00:19:54.692 } 00:19:54.692 }, 00:19:54.692 "base_bdevs_list": [ 00:19:54.692 { 00:19:54.692 "name": "spare", 00:19:54.692 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:54.692 "is_configured": true, 00:19:54.692 "data_offset": 2048, 00:19:54.692 "data_size": 63488 00:19:54.692 }, 00:19:54.692 { 00:19:54.692 "name": "BaseBdev2", 00:19:54.692 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:54.692 "is_configured": true, 00:19:54.692 "data_offset": 2048, 00:19:54.692 "data_size": 63488 00:19:54.692 }, 00:19:54.692 { 00:19:54.692 "name": "BaseBdev3", 00:19:54.692 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:54.692 "is_configured": true, 00:19:54.692 "data_offset": 2048, 00:19:54.692 "data_size": 63488 00:19:54.692 }, 00:19:54.692 { 00:19:54.692 "name": "BaseBdev4", 00:19:54.692 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:54.692 "is_configured": true, 00:19:54.692 "data_offset": 2048, 00:19:54.692 "data_size": 63488 00:19:54.692 } 00:19:54.692 ] 00:19:54.692 }' 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.692 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.951 [2024-11-15 10:48:25.263139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.951 [2024-11-15 10:48:25.301239] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:54.951 [2024-11-15 10:48:25.301389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.951 [2024-11-15 10:48:25.301419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.951 [2024-11-15 10:48:25.301440] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.951 "name": "raid_bdev1", 00:19:54.951 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:54.951 "strip_size_kb": 64, 00:19:54.951 "state": "online", 00:19:54.951 "raid_level": "raid5f", 00:19:54.951 "superblock": true, 00:19:54.951 "num_base_bdevs": 4, 00:19:54.951 "num_base_bdevs_discovered": 3, 00:19:54.951 "num_base_bdevs_operational": 3, 00:19:54.951 "base_bdevs_list": [ 00:19:54.951 { 00:19:54.951 "name": null, 00:19:54.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.951 "is_configured": false, 00:19:54.951 "data_offset": 0, 00:19:54.951 "data_size": 63488 00:19:54.951 }, 00:19:54.951 { 00:19:54.951 "name": "BaseBdev2", 00:19:54.951 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:54.951 "is_configured": true, 00:19:54.951 "data_offset": 2048, 00:19:54.951 "data_size": 63488 00:19:54.951 }, 00:19:54.951 { 00:19:54.951 "name": "BaseBdev3", 00:19:54.951 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:54.951 "is_configured": true, 00:19:54.951 "data_offset": 2048, 00:19:54.951 "data_size": 63488 00:19:54.951 }, 00:19:54.951 { 00:19:54.951 "name": "BaseBdev4", 00:19:54.951 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:54.951 "is_configured": true, 00:19:54.951 "data_offset": 2048, 00:19:54.951 "data_size": 63488 00:19:54.951 } 00:19:54.951 ] 00:19:54.951 }' 
00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.951 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.537 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:55.537 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.537 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.537 [2024-11-15 10:48:25.899074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:55.537 [2024-11-15 10:48:25.899159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.537 [2024-11-15 10:48:25.899195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:55.537 [2024-11-15 10:48:25.899213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.537 [2024-11-15 10:48:25.899936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.537 [2024-11-15 10:48:25.899980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:55.537 [2024-11-15 10:48:25.900125] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:55.537 [2024-11-15 10:48:25.900152] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:55.537 [2024-11-15 10:48:25.900167] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:55.537 [2024-11-15 10:48:25.900220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.537 [2024-11-15 10:48:25.913209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:19:55.537 spare 00:19:55.537 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.537 10:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:55.537 [2024-11-15 10:48:25.922303] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.483 "name": "raid_bdev1", 00:19:56.483 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:56.483 "strip_size_kb": 64, 00:19:56.483 "state": 
"online", 00:19:56.483 "raid_level": "raid5f", 00:19:56.483 "superblock": true, 00:19:56.483 "num_base_bdevs": 4, 00:19:56.483 "num_base_bdevs_discovered": 4, 00:19:56.483 "num_base_bdevs_operational": 4, 00:19:56.483 "process": { 00:19:56.483 "type": "rebuild", 00:19:56.483 "target": "spare", 00:19:56.483 "progress": { 00:19:56.483 "blocks": 17280, 00:19:56.483 "percent": 9 00:19:56.483 } 00:19:56.483 }, 00:19:56.483 "base_bdevs_list": [ 00:19:56.483 { 00:19:56.483 "name": "spare", 00:19:56.483 "uuid": "a5eefbfe-3a41-56d0-94ec-263d38a4ddd1", 00:19:56.483 "is_configured": true, 00:19:56.483 "data_offset": 2048, 00:19:56.483 "data_size": 63488 00:19:56.483 }, 00:19:56.483 { 00:19:56.483 "name": "BaseBdev2", 00:19:56.483 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:56.483 "is_configured": true, 00:19:56.483 "data_offset": 2048, 00:19:56.483 "data_size": 63488 00:19:56.483 }, 00:19:56.483 { 00:19:56.483 "name": "BaseBdev3", 00:19:56.483 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:56.483 "is_configured": true, 00:19:56.483 "data_offset": 2048, 00:19:56.483 "data_size": 63488 00:19:56.483 }, 00:19:56.483 { 00:19:56.483 "name": "BaseBdev4", 00:19:56.483 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:56.483 "is_configured": true, 00:19:56.483 "data_offset": 2048, 00:19:56.483 "data_size": 63488 00:19:56.483 } 00:19:56.483 ] 00:19:56.483 }' 00:19:56.483 10:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.483 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.483 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:56.741 10:48:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.741 [2024-11-15 10:48:27.080585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.741 [2024-11-15 10:48:27.134540] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:56.741 [2024-11-15 10:48:27.134669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.741 [2024-11-15 10:48:27.134720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.741 [2024-11-15 10:48:27.134742] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.741 10:48:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.741 "name": "raid_bdev1", 00:19:56.741 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:56.741 "strip_size_kb": 64, 00:19:56.741 "state": "online", 00:19:56.741 "raid_level": "raid5f", 00:19:56.741 "superblock": true, 00:19:56.741 "num_base_bdevs": 4, 00:19:56.741 "num_base_bdevs_discovered": 3, 00:19:56.741 "num_base_bdevs_operational": 3, 00:19:56.741 "base_bdevs_list": [ 00:19:56.741 { 00:19:56.741 "name": null, 00:19:56.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.741 "is_configured": false, 00:19:56.741 "data_offset": 0, 00:19:56.741 "data_size": 63488 00:19:56.741 }, 00:19:56.741 { 00:19:56.741 "name": "BaseBdev2", 00:19:56.741 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:56.741 "is_configured": true, 00:19:56.741 "data_offset": 2048, 00:19:56.741 "data_size": 63488 00:19:56.741 }, 00:19:56.741 { 00:19:56.741 "name": "BaseBdev3", 00:19:56.741 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:56.741 "is_configured": true, 00:19:56.741 "data_offset": 2048, 00:19:56.741 "data_size": 63488 00:19:56.741 }, 00:19:56.741 { 00:19:56.741 "name": "BaseBdev4", 00:19:56.741 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:56.741 "is_configured": true, 00:19:56.741 "data_offset": 2048, 00:19:56.741 
"data_size": 63488 00:19:56.741 } 00:19:56.741 ] 00:19:56.741 }' 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.741 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.307 "name": "raid_bdev1", 00:19:57.307 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:57.307 "strip_size_kb": 64, 00:19:57.307 "state": "online", 00:19:57.307 "raid_level": "raid5f", 00:19:57.307 "superblock": true, 00:19:57.307 "num_base_bdevs": 4, 00:19:57.307 "num_base_bdevs_discovered": 3, 00:19:57.307 "num_base_bdevs_operational": 3, 00:19:57.307 "base_bdevs_list": [ 00:19:57.307 { 00:19:57.307 "name": null, 00:19:57.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.307 
"is_configured": false, 00:19:57.307 "data_offset": 0, 00:19:57.307 "data_size": 63488 00:19:57.307 }, 00:19:57.307 { 00:19:57.307 "name": "BaseBdev2", 00:19:57.307 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:57.307 "is_configured": true, 00:19:57.307 "data_offset": 2048, 00:19:57.307 "data_size": 63488 00:19:57.307 }, 00:19:57.307 { 00:19:57.307 "name": "BaseBdev3", 00:19:57.307 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:57.307 "is_configured": true, 00:19:57.307 "data_offset": 2048, 00:19:57.307 "data_size": 63488 00:19:57.307 }, 00:19:57.307 { 00:19:57.307 "name": "BaseBdev4", 00:19:57.307 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:57.307 "is_configured": true, 00:19:57.307 "data_offset": 2048, 00:19:57.307 "data_size": 63488 00:19:57.307 } 00:19:57.307 ] 00:19:57.307 }' 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.307 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.565 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.565 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:57.566 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.566 10:48:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.566 [2024-11-15 10:48:27.869715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:57.566 [2024-11-15 10:48:27.869874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.566 [2024-11-15 10:48:27.869943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:57.566 [2024-11-15 10:48:27.869986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.566 [2024-11-15 10:48:27.870805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.566 [2024-11-15 10:48:27.870865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:57.566 [2024-11-15 10:48:27.871059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:57.566 [2024-11-15 10:48:27.871101] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:57.566 [2024-11-15 10:48:27.871131] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:57.566 [2024-11-15 10:48:27.871153] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:57.566 BaseBdev1 00:19:57.566 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.566 10:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.501 "name": "raid_bdev1", 00:19:58.501 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:58.501 "strip_size_kb": 64, 00:19:58.501 "state": "online", 00:19:58.501 "raid_level": "raid5f", 00:19:58.501 "superblock": true, 00:19:58.501 "num_base_bdevs": 4, 00:19:58.501 "num_base_bdevs_discovered": 3, 00:19:58.501 "num_base_bdevs_operational": 3, 00:19:58.501 "base_bdevs_list": [ 00:19:58.501 { 00:19:58.501 "name": null, 00:19:58.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.501 "is_configured": false, 00:19:58.501 
"data_offset": 0, 00:19:58.501 "data_size": 63488 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "name": "BaseBdev2", 00:19:58.501 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:58.501 "is_configured": true, 00:19:58.501 "data_offset": 2048, 00:19:58.501 "data_size": 63488 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "name": "BaseBdev3", 00:19:58.501 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:58.501 "is_configured": true, 00:19:58.501 "data_offset": 2048, 00:19:58.501 "data_size": 63488 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "name": "BaseBdev4", 00:19:58.501 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:58.501 "is_configured": true, 00:19:58.501 "data_offset": 2048, 00:19:58.501 "data_size": 63488 00:19:58.501 } 00:19:58.501 ] 00:19:58.501 }' 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.501 10:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.069 "name": "raid_bdev1", 00:19:59.069 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:19:59.069 "strip_size_kb": 64, 00:19:59.069 "state": "online", 00:19:59.069 "raid_level": "raid5f", 00:19:59.069 "superblock": true, 00:19:59.069 "num_base_bdevs": 4, 00:19:59.069 "num_base_bdevs_discovered": 3, 00:19:59.069 "num_base_bdevs_operational": 3, 00:19:59.069 "base_bdevs_list": [ 00:19:59.069 { 00:19:59.069 "name": null, 00:19:59.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.069 "is_configured": false, 00:19:59.069 "data_offset": 0, 00:19:59.069 "data_size": 63488 00:19:59.069 }, 00:19:59.069 { 00:19:59.069 "name": "BaseBdev2", 00:19:59.069 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:19:59.069 "is_configured": true, 00:19:59.069 "data_offset": 2048, 00:19:59.069 "data_size": 63488 00:19:59.069 }, 00:19:59.069 { 00:19:59.069 "name": "BaseBdev3", 00:19:59.069 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:19:59.069 "is_configured": true, 00:19:59.069 "data_offset": 2048, 00:19:59.069 "data_size": 63488 00:19:59.069 }, 00:19:59.069 { 00:19:59.069 "name": "BaseBdev4", 00:19:59.069 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:19:59.069 "is_configured": true, 00:19:59.069 "data_offset": 2048, 00:19:59.069 "data_size": 63488 00:19:59.069 } 00:19:59.069 ] 00:19:59.069 }' 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:59.069 
10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.069 [2024-11-15 10:48:29.606213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.069 [2024-11-15 10:48:29.606594] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:59.069 [2024-11-15 10:48:29.606627] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:59.069 request: 00:19:59.069 { 00:19:59.069 "base_bdev": "BaseBdev1", 00:19:59.069 "raid_bdev": "raid_bdev1", 00:19:59.069 "method": "bdev_raid_add_base_bdev", 00:19:59.069 "req_id": 1 00:19:59.069 } 00:19:59.069 Got JSON-RPC error response 00:19:59.069 response: 00:19:59.069 { 00:19:59.069 "code": -22, 00:19:59.069 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:19:59.069 } 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:59.069 10:48:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.442 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.443 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.443 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.443 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.443 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.443 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.443 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.443 "name": "raid_bdev1", 00:20:00.443 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:20:00.443 "strip_size_kb": 64, 00:20:00.443 "state": "online", 00:20:00.443 "raid_level": "raid5f", 00:20:00.443 "superblock": true, 00:20:00.443 "num_base_bdevs": 4, 00:20:00.443 "num_base_bdevs_discovered": 3, 00:20:00.443 "num_base_bdevs_operational": 3, 00:20:00.443 "base_bdevs_list": [ 00:20:00.443 { 00:20:00.443 "name": null, 00:20:00.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.443 "is_configured": false, 00:20:00.443 "data_offset": 0, 00:20:00.443 "data_size": 63488 00:20:00.443 }, 00:20:00.443 { 00:20:00.443 "name": "BaseBdev2", 00:20:00.443 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:20:00.443 "is_configured": true, 00:20:00.443 "data_offset": 2048, 00:20:00.443 "data_size": 63488 00:20:00.443 }, 00:20:00.443 { 00:20:00.443 "name": "BaseBdev3", 00:20:00.443 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:20:00.443 "is_configured": true, 00:20:00.443 "data_offset": 2048, 00:20:00.443 "data_size": 63488 00:20:00.443 }, 00:20:00.443 { 00:20:00.443 "name": "BaseBdev4", 00:20:00.443 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:20:00.443 "is_configured": true, 00:20:00.443 "data_offset": 2048, 00:20:00.443 "data_size": 63488 00:20:00.443 } 00:20:00.443 ] 00:20:00.443 }' 00:20:00.443 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.443 10:48:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.701 "name": "raid_bdev1", 00:20:00.701 "uuid": "9b8d4107-d2d1-41cd-97ad-b4c1197a7607", 00:20:00.701 "strip_size_kb": 64, 00:20:00.701 "state": "online", 00:20:00.701 "raid_level": "raid5f", 00:20:00.701 "superblock": true, 00:20:00.701 "num_base_bdevs": 4, 00:20:00.701 "num_base_bdevs_discovered": 3, 00:20:00.701 "num_base_bdevs_operational": 3, 00:20:00.701 "base_bdevs_list": [ 00:20:00.701 { 00:20:00.701 "name": null, 00:20:00.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.701 "is_configured": false, 00:20:00.701 "data_offset": 0, 00:20:00.701 "data_size": 63488 00:20:00.701 }, 00:20:00.701 { 00:20:00.701 "name": "BaseBdev2", 00:20:00.701 "uuid": "f45ef236-e7e9-592b-b938-b0f3ad178174", 00:20:00.701 "is_configured": true, 
00:20:00.701 "data_offset": 2048, 00:20:00.701 "data_size": 63488 00:20:00.701 }, 00:20:00.701 { 00:20:00.701 "name": "BaseBdev3", 00:20:00.701 "uuid": "bfc2ee76-7ad2-5249-a141-15691c570d02", 00:20:00.701 "is_configured": true, 00:20:00.701 "data_offset": 2048, 00:20:00.701 "data_size": 63488 00:20:00.701 }, 00:20:00.701 { 00:20:00.701 "name": "BaseBdev4", 00:20:00.701 "uuid": "fa6b6343-b556-527f-8b54-782c525ceef0", 00:20:00.701 "is_configured": true, 00:20:00.701 "data_offset": 2048, 00:20:00.701 "data_size": 63488 00:20:00.701 } 00:20:00.701 ] 00:20:00.701 }' 00:20:00.701 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85681 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85681 ']' 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85681 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85681 00:20:00.960 killing process with pid 85681 00:20:00.960 Received shutdown signal, test time was about 60.000000 seconds 00:20:00.960 00:20:00.960 Latency(us) 00:20:00.960 [2024-11-15T10:48:31.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.960 [2024-11-15T10:48:31.520Z] 
=================================================================================================================== 00:20:00.960 [2024-11-15T10:48:31.520Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85681' 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85681 00:20:00.960 [2024-11-15 10:48:31.402045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.960 10:48:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85681 00:20:00.960 [2024-11-15 10:48:31.402220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.960 [2024-11-15 10:48:31.402323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.960 [2024-11-15 10:48:31.402364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:01.525 [2024-11-15 10:48:31.823823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.459 10:48:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:02.459 00:20:02.459 real 0m29.604s 00:20:02.459 user 0m38.959s 00:20:02.459 sys 0m2.862s 00:20:02.459 10:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:02.459 10:48:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.459 ************************************ 00:20:02.459 END TEST raid5f_rebuild_test_sb 00:20:02.459 ************************************ 00:20:02.459 10:48:32 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:20:02.459 10:48:32 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:20:02.459 10:48:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:02.459 10:48:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:02.459 10:48:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:02.459 ************************************ 00:20:02.459 START TEST raid_state_function_test_sb_4k 00:20:02.459 ************************************ 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:02.459 10:48:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:02.459 Process raid pid: 86505 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86505 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86505' 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86505 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86505 ']' 00:20:02.459 10:48:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.459 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:02.460 10:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.460 [2024-11-15 10:48:33.014669] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:20:02.460 [2024-11-15 10:48:33.014827] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.717 [2024-11-15 10:48:33.191065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.975 [2024-11-15 10:48:33.301475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.975 [2024-11-15 10:48:33.485956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.975 [2024-11-15 10:48:33.486030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.910 [2024-11-15 10:48:34.158535] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:03.910 [2024-11-15 10:48:34.158618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:03.910 [2024-11-15 10:48:34.158646] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.910 [2024-11-15 10:48:34.158674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.910 
10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.910 "name": "Existed_Raid", 00:20:03.910 "uuid": "1d72454c-8f6b-4734-9f80-27916bd125be", 00:20:03.910 "strip_size_kb": 0, 00:20:03.910 "state": "configuring", 00:20:03.910 "raid_level": "raid1", 00:20:03.910 "superblock": true, 00:20:03.910 "num_base_bdevs": 2, 00:20:03.910 "num_base_bdevs_discovered": 0, 00:20:03.910 "num_base_bdevs_operational": 2, 00:20:03.910 "base_bdevs_list": [ 00:20:03.910 { 00:20:03.910 "name": "BaseBdev1", 00:20:03.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.910 "is_configured": false, 00:20:03.910 "data_offset": 0, 00:20:03.910 "data_size": 0 00:20:03.910 }, 00:20:03.910 { 00:20:03.910 "name": "BaseBdev2", 00:20:03.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.910 "is_configured": false, 00:20:03.910 "data_offset": 0, 00:20:03.910 "data_size": 0 00:20:03.910 } 00:20:03.910 ] 00:20:03.910 }' 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.910 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.168 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:04.168 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.169 [2024-11-15 10:48:34.670579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:04.169 [2024-11-15 10:48:34.670780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.169 [2024-11-15 10:48:34.682617] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:04.169 [2024-11-15 10:48:34.682685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:04.169 [2024-11-15 10:48:34.682703] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:04.169 [2024-11-15 10:48:34.682721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.169 10:48:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.169 [2024-11-15 10:48:34.723607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.169 BaseBdev1 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.169 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.426 [ 00:20:04.426 { 00:20:04.426 "name": "BaseBdev1", 00:20:04.426 "aliases": [ 00:20:04.426 
"259b3a12-772f-4b99-833a-440d9af39e37" 00:20:04.426 ], 00:20:04.426 "product_name": "Malloc disk", 00:20:04.426 "block_size": 4096, 00:20:04.426 "num_blocks": 8192, 00:20:04.426 "uuid": "259b3a12-772f-4b99-833a-440d9af39e37", 00:20:04.426 "assigned_rate_limits": { 00:20:04.426 "rw_ios_per_sec": 0, 00:20:04.426 "rw_mbytes_per_sec": 0, 00:20:04.426 "r_mbytes_per_sec": 0, 00:20:04.426 "w_mbytes_per_sec": 0 00:20:04.426 }, 00:20:04.426 "claimed": true, 00:20:04.426 "claim_type": "exclusive_write", 00:20:04.426 "zoned": false, 00:20:04.426 "supported_io_types": { 00:20:04.426 "read": true, 00:20:04.426 "write": true, 00:20:04.426 "unmap": true, 00:20:04.426 "flush": true, 00:20:04.426 "reset": true, 00:20:04.426 "nvme_admin": false, 00:20:04.426 "nvme_io": false, 00:20:04.426 "nvme_io_md": false, 00:20:04.426 "write_zeroes": true, 00:20:04.426 "zcopy": true, 00:20:04.426 "get_zone_info": false, 00:20:04.426 "zone_management": false, 00:20:04.426 "zone_append": false, 00:20:04.426 "compare": false, 00:20:04.426 "compare_and_write": false, 00:20:04.426 "abort": true, 00:20:04.426 "seek_hole": false, 00:20:04.426 "seek_data": false, 00:20:04.426 "copy": true, 00:20:04.426 "nvme_iov_md": false 00:20:04.426 }, 00:20:04.426 "memory_domains": [ 00:20:04.426 { 00:20:04.426 "dma_device_id": "system", 00:20:04.426 "dma_device_type": 1 00:20:04.426 }, 00:20:04.426 { 00:20:04.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.426 "dma_device_type": 2 00:20:04.426 } 00:20:04.426 ], 00:20:04.426 "driver_specific": {} 00:20:04.426 } 00:20:04.426 ] 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.426 "name": "Existed_Raid", 00:20:04.426 "uuid": "a8b8bedd-f8b4-4c7f-9d74-1da95418f48d", 00:20:04.426 "strip_size_kb": 0, 00:20:04.426 "state": "configuring", 00:20:04.426 "raid_level": "raid1", 00:20:04.426 "superblock": true, 00:20:04.426 "num_base_bdevs": 2, 00:20:04.426 
"num_base_bdevs_discovered": 1, 00:20:04.426 "num_base_bdevs_operational": 2, 00:20:04.426 "base_bdevs_list": [ 00:20:04.426 { 00:20:04.426 "name": "BaseBdev1", 00:20:04.426 "uuid": "259b3a12-772f-4b99-833a-440d9af39e37", 00:20:04.426 "is_configured": true, 00:20:04.426 "data_offset": 256, 00:20:04.426 "data_size": 7936 00:20:04.426 }, 00:20:04.426 { 00:20:04.426 "name": "BaseBdev2", 00:20:04.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.426 "is_configured": false, 00:20:04.426 "data_offset": 0, 00:20:04.426 "data_size": 0 00:20:04.426 } 00:20:04.426 ] 00:20:04.426 }' 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.426 10:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.990 [2024-11-15 10:48:35.255798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:04.990 [2024-11-15 10:48:35.255867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.990 [2024-11-15 10:48:35.263871] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.990 [2024-11-15 10:48:35.266165] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:04.990 [2024-11-15 10:48:35.266222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.990 "name": "Existed_Raid", 00:20:04.990 "uuid": "66985560-ab55-4207-937c-64740e05da7d", 00:20:04.990 "strip_size_kb": 0, 00:20:04.990 "state": "configuring", 00:20:04.990 "raid_level": "raid1", 00:20:04.990 "superblock": true, 00:20:04.990 "num_base_bdevs": 2, 00:20:04.990 "num_base_bdevs_discovered": 1, 00:20:04.990 "num_base_bdevs_operational": 2, 00:20:04.990 "base_bdevs_list": [ 00:20:04.990 { 00:20:04.990 "name": "BaseBdev1", 00:20:04.990 "uuid": "259b3a12-772f-4b99-833a-440d9af39e37", 00:20:04.990 "is_configured": true, 00:20:04.990 "data_offset": 256, 00:20:04.990 "data_size": 7936 00:20:04.990 }, 00:20:04.990 { 00:20:04.990 "name": "BaseBdev2", 00:20:04.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.990 "is_configured": false, 00:20:04.990 "data_offset": 0, 00:20:04.990 "data_size": 0 00:20:04.990 } 00:20:04.990 ] 00:20:04.990 }' 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.990 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.556 10:48:35 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.556 [2024-11-15 10:48:35.871591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.556 [2024-11-15 10:48:35.871939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:05.556 [2024-11-15 10:48:35.871979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:05.556 BaseBdev2 00:20:05.556 [2024-11-15 10:48:35.872400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:05.556 [2024-11-15 10:48:35.872646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:05.556 [2024-11-15 10:48:35.872679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:05.556 [2024-11-15 10:48:35.872892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:05.556 10:48:35 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.556 [ 00:20:05.556 { 00:20:05.556 "name": "BaseBdev2", 00:20:05.556 "aliases": [ 00:20:05.556 "d6b0a7d0-7dff-4091-8312-2ead94a18173" 00:20:05.556 ], 00:20:05.556 "product_name": "Malloc disk", 00:20:05.556 "block_size": 4096, 00:20:05.556 "num_blocks": 8192, 00:20:05.556 "uuid": "d6b0a7d0-7dff-4091-8312-2ead94a18173", 00:20:05.556 "assigned_rate_limits": { 00:20:05.556 "rw_ios_per_sec": 0, 00:20:05.556 "rw_mbytes_per_sec": 0, 00:20:05.556 "r_mbytes_per_sec": 0, 00:20:05.556 "w_mbytes_per_sec": 0 00:20:05.556 }, 00:20:05.556 "claimed": true, 00:20:05.556 "claim_type": "exclusive_write", 00:20:05.556 "zoned": false, 00:20:05.556 "supported_io_types": { 00:20:05.556 "read": true, 00:20:05.556 "write": true, 00:20:05.556 "unmap": true, 00:20:05.556 "flush": true, 00:20:05.556 "reset": true, 00:20:05.556 "nvme_admin": false, 00:20:05.556 "nvme_io": false, 00:20:05.556 "nvme_io_md": false, 00:20:05.556 "write_zeroes": true, 00:20:05.556 "zcopy": true, 00:20:05.556 "get_zone_info": false, 00:20:05.556 "zone_management": false, 00:20:05.556 "zone_append": false, 00:20:05.556 "compare": false, 00:20:05.556 "compare_and_write": false, 00:20:05.556 "abort": true, 00:20:05.556 "seek_hole": false, 00:20:05.556 "seek_data": false, 00:20:05.556 "copy": true, 00:20:05.556 "nvme_iov_md": false 
00:20:05.556 }, 00:20:05.556 "memory_domains": [ 00:20:05.556 { 00:20:05.556 "dma_device_id": "system", 00:20:05.556 "dma_device_type": 1 00:20:05.556 }, 00:20:05.556 { 00:20:05.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.556 "dma_device_type": 2 00:20:05.556 } 00:20:05.556 ], 00:20:05.556 "driver_specific": {} 00:20:05.556 } 00:20:05.556 ] 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.556 "name": "Existed_Raid", 00:20:05.556 "uuid": "66985560-ab55-4207-937c-64740e05da7d", 00:20:05.556 "strip_size_kb": 0, 00:20:05.556 "state": "online", 00:20:05.556 "raid_level": "raid1", 00:20:05.556 "superblock": true, 00:20:05.556 "num_base_bdevs": 2, 00:20:05.556 "num_base_bdevs_discovered": 2, 00:20:05.556 "num_base_bdevs_operational": 2, 00:20:05.556 "base_bdevs_list": [ 00:20:05.556 { 00:20:05.556 "name": "BaseBdev1", 00:20:05.556 "uuid": "259b3a12-772f-4b99-833a-440d9af39e37", 00:20:05.556 "is_configured": true, 00:20:05.556 "data_offset": 256, 00:20:05.556 "data_size": 7936 00:20:05.556 }, 00:20:05.556 { 00:20:05.556 "name": "BaseBdev2", 00:20:05.556 "uuid": "d6b0a7d0-7dff-4091-8312-2ead94a18173", 00:20:05.556 "is_configured": true, 00:20:05.556 "data_offset": 256, 00:20:05.556 "data_size": 7936 00:20:05.556 } 00:20:05.556 ] 00:20:05.556 }' 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.556 10:48:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:06.122 10:48:36 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.122 [2024-11-15 10:48:36.452110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.122 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:06.122 "name": "Existed_Raid", 00:20:06.122 "aliases": [ 00:20:06.122 "66985560-ab55-4207-937c-64740e05da7d" 00:20:06.122 ], 00:20:06.122 "product_name": "Raid Volume", 00:20:06.122 "block_size": 4096, 00:20:06.122 "num_blocks": 7936, 00:20:06.122 "uuid": "66985560-ab55-4207-937c-64740e05da7d", 00:20:06.122 "assigned_rate_limits": { 00:20:06.122 "rw_ios_per_sec": 0, 00:20:06.122 "rw_mbytes_per_sec": 0, 00:20:06.122 "r_mbytes_per_sec": 0, 00:20:06.122 "w_mbytes_per_sec": 0 00:20:06.122 }, 00:20:06.122 "claimed": false, 00:20:06.122 "zoned": false, 00:20:06.122 "supported_io_types": { 00:20:06.122 "read": true, 
00:20:06.122 "write": true, 00:20:06.122 "unmap": false, 00:20:06.122 "flush": false, 00:20:06.122 "reset": true, 00:20:06.122 "nvme_admin": false, 00:20:06.122 "nvme_io": false, 00:20:06.122 "nvme_io_md": false, 00:20:06.122 "write_zeroes": true, 00:20:06.122 "zcopy": false, 00:20:06.122 "get_zone_info": false, 00:20:06.122 "zone_management": false, 00:20:06.122 "zone_append": false, 00:20:06.122 "compare": false, 00:20:06.122 "compare_and_write": false, 00:20:06.122 "abort": false, 00:20:06.122 "seek_hole": false, 00:20:06.122 "seek_data": false, 00:20:06.122 "copy": false, 00:20:06.122 "nvme_iov_md": false 00:20:06.122 }, 00:20:06.122 "memory_domains": [ 00:20:06.122 { 00:20:06.122 "dma_device_id": "system", 00:20:06.122 "dma_device_type": 1 00:20:06.122 }, 00:20:06.122 { 00:20:06.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.122 "dma_device_type": 2 00:20:06.122 }, 00:20:06.122 { 00:20:06.122 "dma_device_id": "system", 00:20:06.122 "dma_device_type": 1 00:20:06.122 }, 00:20:06.122 { 00:20:06.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.122 "dma_device_type": 2 00:20:06.122 } 00:20:06.122 ], 00:20:06.122 "driver_specific": { 00:20:06.122 "raid": { 00:20:06.122 "uuid": "66985560-ab55-4207-937c-64740e05da7d", 00:20:06.122 "strip_size_kb": 0, 00:20:06.122 "state": "online", 00:20:06.122 "raid_level": "raid1", 00:20:06.122 "superblock": true, 00:20:06.122 "num_base_bdevs": 2, 00:20:06.122 "num_base_bdevs_discovered": 2, 00:20:06.122 "num_base_bdevs_operational": 2, 00:20:06.122 "base_bdevs_list": [ 00:20:06.122 { 00:20:06.122 "name": "BaseBdev1", 00:20:06.122 "uuid": "259b3a12-772f-4b99-833a-440d9af39e37", 00:20:06.122 "is_configured": true, 00:20:06.122 "data_offset": 256, 00:20:06.122 "data_size": 7936 00:20:06.122 }, 00:20:06.122 { 00:20:06.122 "name": "BaseBdev2", 00:20:06.122 "uuid": "d6b0a7d0-7dff-4091-8312-2ead94a18173", 00:20:06.122 "is_configured": true, 00:20:06.122 "data_offset": 256, 00:20:06.122 "data_size": 7936 00:20:06.122 } 
00:20:06.122 ] 00:20:06.122 } 00:20:06.122 } 00:20:06.122 }' 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:06.123 BaseBdev2' 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:06.123 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.381 [2024-11-15 10:48:36.744158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:06.381 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.382 "name": "Existed_Raid", 00:20:06.382 "uuid": "66985560-ab55-4207-937c-64740e05da7d", 00:20:06.382 "strip_size_kb": 0, 00:20:06.382 "state": "online", 00:20:06.382 "raid_level": "raid1", 00:20:06.382 "superblock": true, 00:20:06.382 "num_base_bdevs": 2, 00:20:06.382 
"num_base_bdevs_discovered": 1, 00:20:06.382 "num_base_bdevs_operational": 1, 00:20:06.382 "base_bdevs_list": [ 00:20:06.382 { 00:20:06.382 "name": null, 00:20:06.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.382 "is_configured": false, 00:20:06.382 "data_offset": 0, 00:20:06.382 "data_size": 7936 00:20:06.382 }, 00:20:06.382 { 00:20:06.382 "name": "BaseBdev2", 00:20:06.382 "uuid": "d6b0a7d0-7dff-4091-8312-2ead94a18173", 00:20:06.382 "is_configured": true, 00:20:06.382 "data_offset": 256, 00:20:06.382 "data_size": 7936 00:20:06.382 } 00:20:06.382 ] 00:20:06.382 }' 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.382 10:48:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:06.973 10:48:37 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.973 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.973 [2024-11-15 10:48:37.470429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:06.973 [2024-11-15 10:48:37.470567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:07.232 [2024-11-15 10:48:37.552008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.232 [2024-11-15 10:48:37.552090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:07.233 [2024-11-15 10:48:37.552110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86505 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86505 ']' 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86505 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86505 00:20:07.233 killing process with pid 86505 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86505' 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86505 00:20:07.233 10:48:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86505 00:20:07.233 [2024-11-15 10:48:37.639266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:07.233 [2024-11-15 10:48:37.653947] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:08.607 10:48:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:08.607 00:20:08.607 real 0m5.886s 00:20:08.607 user 0m9.046s 00:20:08.607 sys 0m0.689s 00:20:08.607 10:48:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:20:08.607 10:48:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.607 ************************************ 00:20:08.607 END TEST raid_state_function_test_sb_4k 00:20:08.607 ************************************ 00:20:08.607 10:48:38 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:08.607 10:48:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:08.607 10:48:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:08.607 10:48:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:08.607 ************************************ 00:20:08.607 START TEST raid_superblock_test_4k 00:20:08.607 ************************************ 00:20:08.607 10:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:20:08.607 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:08.607 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86764 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86764 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86764 ']' 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:08.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:08.608 10:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.608 [2024-11-15 10:48:38.974860] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:20:08.608 [2024-11-15 10:48:38.975117] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86764 ] 00:20:08.608 [2024-11-15 10:48:39.158079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.866 [2024-11-15 10:48:39.341971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.124 [2024-11-15 10:48:39.534380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.124 [2024-11-15 10:48:39.534482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.691 malloc1 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.691 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.691 [2024-11-15 10:48:40.151407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:09.691 [2024-11-15 10:48:40.151484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.691 [2024-11-15 10:48:40.151521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:09.692 [2024-11-15 10:48:40.151539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.692 [2024-11-15 10:48:40.154307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.692 [2024-11-15 10:48:40.154374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:09.692 pt1 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.692 malloc2 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.692 [2024-11-15 10:48:40.205512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.692 [2024-11-15 10:48:40.205617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.692 [2024-11-15 10:48:40.205665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:09.692 [2024-11-15 10:48:40.205681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.692 [2024-11-15 10:48:40.208684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.692 [2024-11-15 
10:48:40.208739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:09.692 pt2 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.692 [2024-11-15 10:48:40.217763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:09.692 [2024-11-15 10:48:40.220183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.692 [2024-11-15 10:48:40.220488] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:09.692 [2024-11-15 10:48:40.220516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:09.692 [2024-11-15 10:48:40.220896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:09.692 [2024-11-15 10:48:40.221154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:09.692 [2024-11-15 10:48:40.221184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:09.692 [2024-11-15 10:48:40.221445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.692 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.951 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.951 "name": "raid_bdev1", 00:20:09.951 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 00:20:09.951 "strip_size_kb": 0, 00:20:09.951 "state": "online", 00:20:09.951 "raid_level": "raid1", 00:20:09.951 "superblock": true, 00:20:09.951 "num_base_bdevs": 2, 00:20:09.951 
"num_base_bdevs_discovered": 2, 00:20:09.951 "num_base_bdevs_operational": 2, 00:20:09.951 "base_bdevs_list": [ 00:20:09.951 { 00:20:09.951 "name": "pt1", 00:20:09.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.951 "is_configured": true, 00:20:09.951 "data_offset": 256, 00:20:09.951 "data_size": 7936 00:20:09.951 }, 00:20:09.951 { 00:20:09.951 "name": "pt2", 00:20:09.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.951 "is_configured": true, 00:20:09.951 "data_offset": 256, 00:20:09.951 "data_size": 7936 00:20:09.951 } 00:20:09.951 ] 00:20:09.951 }' 00:20:09.951 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.951 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.517 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:10.517 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:10.517 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:10.517 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:10.517 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:10.517 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:10.517 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:10.518 [2024-11-15 10:48:40.778191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:10.518 "name": "raid_bdev1", 00:20:10.518 "aliases": [ 00:20:10.518 "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b" 00:20:10.518 ], 00:20:10.518 "product_name": "Raid Volume", 00:20:10.518 "block_size": 4096, 00:20:10.518 "num_blocks": 7936, 00:20:10.518 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 00:20:10.518 "assigned_rate_limits": { 00:20:10.518 "rw_ios_per_sec": 0, 00:20:10.518 "rw_mbytes_per_sec": 0, 00:20:10.518 "r_mbytes_per_sec": 0, 00:20:10.518 "w_mbytes_per_sec": 0 00:20:10.518 }, 00:20:10.518 "claimed": false, 00:20:10.518 "zoned": false, 00:20:10.518 "supported_io_types": { 00:20:10.518 "read": true, 00:20:10.518 "write": true, 00:20:10.518 "unmap": false, 00:20:10.518 "flush": false, 00:20:10.518 "reset": true, 00:20:10.518 "nvme_admin": false, 00:20:10.518 "nvme_io": false, 00:20:10.518 "nvme_io_md": false, 00:20:10.518 "write_zeroes": true, 00:20:10.518 "zcopy": false, 00:20:10.518 "get_zone_info": false, 00:20:10.518 "zone_management": false, 00:20:10.518 "zone_append": false, 00:20:10.518 "compare": false, 00:20:10.518 "compare_and_write": false, 00:20:10.518 "abort": false, 00:20:10.518 "seek_hole": false, 00:20:10.518 "seek_data": false, 00:20:10.518 "copy": false, 00:20:10.518 "nvme_iov_md": false 00:20:10.518 }, 00:20:10.518 "memory_domains": [ 00:20:10.518 { 00:20:10.518 "dma_device_id": "system", 00:20:10.518 "dma_device_type": 1 00:20:10.518 }, 00:20:10.518 { 00:20:10.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.518 "dma_device_type": 2 00:20:10.518 }, 00:20:10.518 { 00:20:10.518 "dma_device_id": "system", 00:20:10.518 "dma_device_type": 1 00:20:10.518 }, 00:20:10.518 { 00:20:10.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.518 "dma_device_type": 2 00:20:10.518 } 00:20:10.518 ], 
00:20:10.518 "driver_specific": { 00:20:10.518 "raid": { 00:20:10.518 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 00:20:10.518 "strip_size_kb": 0, 00:20:10.518 "state": "online", 00:20:10.518 "raid_level": "raid1", 00:20:10.518 "superblock": true, 00:20:10.518 "num_base_bdevs": 2, 00:20:10.518 "num_base_bdevs_discovered": 2, 00:20:10.518 "num_base_bdevs_operational": 2, 00:20:10.518 "base_bdevs_list": [ 00:20:10.518 { 00:20:10.518 "name": "pt1", 00:20:10.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.518 "is_configured": true, 00:20:10.518 "data_offset": 256, 00:20:10.518 "data_size": 7936 00:20:10.518 }, 00:20:10.518 { 00:20:10.518 "name": "pt2", 00:20:10.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.518 "is_configured": true, 00:20:10.518 "data_offset": 256, 00:20:10.518 "data_size": 7936 00:20:10.518 } 00:20:10.518 ] 00:20:10.518 } 00:20:10.518 } 00:20:10.518 }' 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:10.518 pt2' 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.518 10:48:40 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.518 10:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.518 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.518 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:10.518 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:10.518 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:10.518 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:10.518 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.518 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.518 [2024-11-15 10:48:41.062282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eb4a0e76-13e2-4ca8-9466-bc87a9ac258b 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z eb4a0e76-13e2-4ca8-9466-bc87a9ac258b ']' 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.777 [2024-11-15 10:48:41.117895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:10.777 [2024-11-15 10:48:41.117940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:10.777 [2024-11-15 10:48:41.118057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.777 [2024-11-15 10:48:41.118140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.777 [2024-11-15 10:48:41.118161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.777 [2024-11-15 10:48:41.265986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:10.777 [2024-11-15 10:48:41.268546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:10.777 [2024-11-15 10:48:41.268691] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:10.777 [2024-11-15 10:48:41.268815] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:10.777 [2024-11-15 10:48:41.268863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:10.777 [2024-11-15 10:48:41.268894] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:10.777 request: 00:20:10.777 { 00:20:10.777 "name": "raid_bdev1", 00:20:10.777 "raid_level": "raid1", 00:20:10.777 "base_bdevs": [ 00:20:10.777 "malloc1", 00:20:10.777 "malloc2" 00:20:10.777 ], 00:20:10.777 "superblock": false, 00:20:10.777 "method": "bdev_raid_create", 00:20:10.777 "req_id": 1 00:20:10.777 } 00:20:10.777 Got JSON-RPC error response 00:20:10.777 response: 00:20:10.777 { 00:20:10.777 "code": -17, 00:20:10.777 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:10.777 } 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.777 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.778 [2024-11-15 10:48:41.326016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:10.778 [2024-11-15 10:48:41.326117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.778 [2024-11-15 10:48:41.326159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:10.778 [2024-11-15 10:48:41.326178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.778 [2024-11-15 10:48:41.329217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.778 [2024-11-15 10:48:41.329302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:10.778 [2024-11-15 10:48:41.329505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:10.778 [2024-11-15 10:48:41.329635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.778 pt1 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.778 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.037 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.037 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.037 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.037 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.037 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.037 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.037 "name": "raid_bdev1", 00:20:11.037 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 00:20:11.037 "strip_size_kb": 0, 00:20:11.037 "state": "configuring", 00:20:11.037 "raid_level": "raid1", 00:20:11.037 "superblock": true, 00:20:11.037 "num_base_bdevs": 2, 00:20:11.037 "num_base_bdevs_discovered": 1, 00:20:11.037 "num_base_bdevs_operational": 2, 00:20:11.037 "base_bdevs_list": [ 00:20:11.037 { 00:20:11.037 "name": "pt1", 00:20:11.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.037 "is_configured": true, 00:20:11.037 "data_offset": 256, 00:20:11.037 "data_size": 7936 00:20:11.037 }, 00:20:11.037 { 00:20:11.037 "name": null, 00:20:11.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.037 "is_configured": false, 00:20:11.037 "data_offset": 256, 00:20:11.037 "data_size": 7936 00:20:11.037 } 
00:20:11.037 ] 00:20:11.037 }' 00:20:11.037 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.037 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.604 [2024-11-15 10:48:41.906152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:11.604 [2024-11-15 10:48:41.906241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.604 [2024-11-15 10:48:41.906276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:11.604 [2024-11-15 10:48:41.906294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.604 [2024-11-15 10:48:41.906925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.604 [2024-11-15 10:48:41.906980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:11.604 [2024-11-15 10:48:41.907089] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:11.604 [2024-11-15 10:48:41.907130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:11.604 [2024-11-15 10:48:41.907289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:20:11.604 [2024-11-15 10:48:41.907311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:11.604 [2024-11-15 10:48:41.907668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:11.604 [2024-11-15 10:48:41.907879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:11.604 [2024-11-15 10:48:41.907895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:11.604 [2024-11-15 10:48:41.908078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.604 pt2 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.604 "name": "raid_bdev1", 00:20:11.604 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 00:20:11.604 "strip_size_kb": 0, 00:20:11.604 "state": "online", 00:20:11.604 "raid_level": "raid1", 00:20:11.604 "superblock": true, 00:20:11.604 "num_base_bdevs": 2, 00:20:11.604 "num_base_bdevs_discovered": 2, 00:20:11.604 "num_base_bdevs_operational": 2, 00:20:11.604 "base_bdevs_list": [ 00:20:11.604 { 00:20:11.604 "name": "pt1", 00:20:11.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.604 "is_configured": true, 00:20:11.604 "data_offset": 256, 00:20:11.604 "data_size": 7936 00:20:11.604 }, 00:20:11.604 { 00:20:11.604 "name": "pt2", 00:20:11.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.604 "is_configured": true, 00:20:11.604 "data_offset": 256, 00:20:11.604 "data_size": 7936 00:20:11.604 } 00:20:11.604 ] 00:20:11.604 }' 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.604 10:48:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.220 [2024-11-15 10:48:42.494651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.220 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:12.220 "name": "raid_bdev1", 00:20:12.220 "aliases": [ 00:20:12.220 "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b" 00:20:12.220 ], 00:20:12.220 "product_name": "Raid Volume", 00:20:12.220 "block_size": 4096, 00:20:12.220 "num_blocks": 7936, 00:20:12.220 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 00:20:12.220 "assigned_rate_limits": { 00:20:12.220 "rw_ios_per_sec": 0, 00:20:12.220 "rw_mbytes_per_sec": 0, 00:20:12.220 "r_mbytes_per_sec": 0, 00:20:12.220 "w_mbytes_per_sec": 0 00:20:12.220 }, 00:20:12.220 "claimed": false, 00:20:12.220 "zoned": false, 00:20:12.220 "supported_io_types": { 00:20:12.220 "read": true, 00:20:12.220 "write": true, 00:20:12.220 "unmap": false, 
00:20:12.220 "flush": false, 00:20:12.220 "reset": true, 00:20:12.220 "nvme_admin": false, 00:20:12.220 "nvme_io": false, 00:20:12.220 "nvme_io_md": false, 00:20:12.220 "write_zeroes": true, 00:20:12.220 "zcopy": false, 00:20:12.220 "get_zone_info": false, 00:20:12.221 "zone_management": false, 00:20:12.221 "zone_append": false, 00:20:12.221 "compare": false, 00:20:12.221 "compare_and_write": false, 00:20:12.221 "abort": false, 00:20:12.221 "seek_hole": false, 00:20:12.221 "seek_data": false, 00:20:12.221 "copy": false, 00:20:12.221 "nvme_iov_md": false 00:20:12.221 }, 00:20:12.221 "memory_domains": [ 00:20:12.221 { 00:20:12.221 "dma_device_id": "system", 00:20:12.221 "dma_device_type": 1 00:20:12.221 }, 00:20:12.221 { 00:20:12.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.221 "dma_device_type": 2 00:20:12.221 }, 00:20:12.221 { 00:20:12.221 "dma_device_id": "system", 00:20:12.221 "dma_device_type": 1 00:20:12.221 }, 00:20:12.221 { 00:20:12.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.221 "dma_device_type": 2 00:20:12.221 } 00:20:12.221 ], 00:20:12.221 "driver_specific": { 00:20:12.221 "raid": { 00:20:12.221 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 00:20:12.221 "strip_size_kb": 0, 00:20:12.221 "state": "online", 00:20:12.221 "raid_level": "raid1", 00:20:12.221 "superblock": true, 00:20:12.221 "num_base_bdevs": 2, 00:20:12.221 "num_base_bdevs_discovered": 2, 00:20:12.221 "num_base_bdevs_operational": 2, 00:20:12.221 "base_bdevs_list": [ 00:20:12.221 { 00:20:12.221 "name": "pt1", 00:20:12.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:12.221 "is_configured": true, 00:20:12.221 "data_offset": 256, 00:20:12.221 "data_size": 7936 00:20:12.221 }, 00:20:12.221 { 00:20:12.221 "name": "pt2", 00:20:12.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:12.221 "is_configured": true, 00:20:12.221 "data_offset": 256, 00:20:12.221 "data_size": 7936 00:20:12.221 } 00:20:12.221 ] 00:20:12.221 } 00:20:12.221 } 00:20:12.221 }' 00:20:12.221 
10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:12.221 pt2' 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.221 
10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.221 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.479 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:12.480 [2024-11-15 10:48:42.798741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' eb4a0e76-13e2-4ca8-9466-bc87a9ac258b '!=' eb4a0e76-13e2-4ca8-9466-bc87a9ac258b ']' 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.480 [2024-11-15 10:48:42.858523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:12.480 
10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.480 "name": "raid_bdev1", 00:20:12.480 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 
00:20:12.480 "strip_size_kb": 0, 00:20:12.480 "state": "online", 00:20:12.480 "raid_level": "raid1", 00:20:12.480 "superblock": true, 00:20:12.480 "num_base_bdevs": 2, 00:20:12.480 "num_base_bdevs_discovered": 1, 00:20:12.480 "num_base_bdevs_operational": 1, 00:20:12.480 "base_bdevs_list": [ 00:20:12.480 { 00:20:12.480 "name": null, 00:20:12.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.480 "is_configured": false, 00:20:12.480 "data_offset": 0, 00:20:12.480 "data_size": 7936 00:20:12.480 }, 00:20:12.480 { 00:20:12.480 "name": "pt2", 00:20:12.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:12.480 "is_configured": true, 00:20:12.480 "data_offset": 256, 00:20:12.480 "data_size": 7936 00:20:12.480 } 00:20:12.480 ] 00:20:12.480 }' 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.480 10:48:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.046 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:13.046 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.046 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.046 [2024-11-15 10:48:43.430627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.046 [2024-11-15 10:48:43.430668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.046 [2024-11-15 10:48:43.430774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.046 [2024-11-15 10:48:43.430844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.047 [2024-11-15 10:48:43.430864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:13.047 10:48:43 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:20:13.047 10:48:43 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.047 [2024-11-15 10:48:43.514676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:13.047 [2024-11-15 10:48:43.514772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.047 [2024-11-15 10:48:43.514803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:13.047 [2024-11-15 10:48:43.514822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.047 [2024-11-15 10:48:43.517659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.047 [2024-11-15 10:48:43.517721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:13.047 [2024-11-15 10:48:43.517847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:13.047 [2024-11-15 10:48:43.517917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.047 [2024-11-15 10:48:43.518059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:13.047 [2024-11-15 10:48:43.518091] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:13.047 [2024-11-15 10:48:43.518455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:13.047 [2024-11-15 10:48:43.518710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:13.047 [2024-11-15 10:48:43.518728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:20:13.047 [2024-11-15 10:48:43.519004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.047 pt2 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.047 "name": "raid_bdev1", 00:20:13.047 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 00:20:13.047 "strip_size_kb": 0, 00:20:13.047 "state": "online", 00:20:13.047 "raid_level": "raid1", 00:20:13.047 "superblock": true, 00:20:13.047 "num_base_bdevs": 2, 00:20:13.047 "num_base_bdevs_discovered": 1, 00:20:13.047 "num_base_bdevs_operational": 1, 00:20:13.047 "base_bdevs_list": [ 00:20:13.047 { 00:20:13.047 "name": null, 00:20:13.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.047 "is_configured": false, 00:20:13.047 "data_offset": 256, 00:20:13.047 "data_size": 7936 00:20:13.047 }, 00:20:13.047 { 00:20:13.047 "name": "pt2", 00:20:13.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.047 "is_configured": true, 00:20:13.047 "data_offset": 256, 00:20:13.047 "data_size": 7936 00:20:13.047 } 00:20:13.047 ] 00:20:13.047 }' 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.047 10:48:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.614 [2024-11-15 10:48:44.099093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.614 [2024-11-15 10:48:44.099142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.614 [2024-11-15 10:48:44.099241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.614 [2024-11-15 10:48:44.099320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.614 [2024-11-15 10:48:44.099338] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.614 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.614 [2024-11-15 10:48:44.171174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:13.614 [2024-11-15 10:48:44.171272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.614 [2024-11-15 10:48:44.171309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:13.614 [2024-11-15 10:48:44.171325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.873 [2024-11-15 10:48:44.174269] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.873 [2024-11-15 10:48:44.174371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:13.873 [2024-11-15 10:48:44.174538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:13.873 [2024-11-15 10:48:44.174623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:13.873 [2024-11-15 10:48:44.174832] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:13.873 [2024-11-15 10:48:44.174852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.873 [2024-11-15 10:48:44.174878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:13.873 [2024-11-15 10:48:44.174968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.873 [2024-11-15 10:48:44.175090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:13.873 [2024-11-15 10:48:44.175107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:13.873 [2024-11-15 10:48:44.175528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:13.873 [2024-11-15 10:48:44.175742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:13.873 [2024-11-15 10:48:44.175765] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:13.873 [2024-11-15 10:48:44.176023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.873 pt1 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.873 "name": "raid_bdev1", 00:20:13.873 "uuid": "eb4a0e76-13e2-4ca8-9466-bc87a9ac258b", 00:20:13.873 "strip_size_kb": 0, 00:20:13.873 "state": "online", 00:20:13.873 "raid_level": "raid1", 
00:20:13.873 "superblock": true, 00:20:13.873 "num_base_bdevs": 2, 00:20:13.873 "num_base_bdevs_discovered": 1, 00:20:13.873 "num_base_bdevs_operational": 1, 00:20:13.873 "base_bdevs_list": [ 00:20:13.873 { 00:20:13.873 "name": null, 00:20:13.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.873 "is_configured": false, 00:20:13.873 "data_offset": 256, 00:20:13.873 "data_size": 7936 00:20:13.873 }, 00:20:13.873 { 00:20:13.873 "name": "pt2", 00:20:13.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.873 "is_configured": true, 00:20:13.873 "data_offset": 256, 00:20:13.873 "data_size": 7936 00:20:13.873 } 00:20:13.873 ] 00:20:13.873 }' 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.873 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:14.441 
[2024-11-15 10:48:44.779661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' eb4a0e76-13e2-4ca8-9466-bc87a9ac258b '!=' eb4a0e76-13e2-4ca8-9466-bc87a9ac258b ']' 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86764 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86764 ']' 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86764 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86764 00:20:14.441 killing process with pid 86764 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86764' 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86764 00:20:14.441 [2024-11-15 10:48:44.863694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.441 10:48:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86764 00:20:14.441 [2024-11-15 10:48:44.863820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.441 [2024-11-15 10:48:44.863891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:20:14.441 [2024-11-15 10:48:44.863917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:14.700 [2024-11-15 10:48:45.040165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:15.638 ************************************ 00:20:15.638 END TEST raid_superblock_test_4k 00:20:15.638 ************************************ 00:20:15.638 10:48:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:15.638 00:20:15.638 real 0m7.215s 00:20:15.638 user 0m11.716s 00:20:15.638 sys 0m0.860s 00:20:15.638 10:48:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:15.638 10:48:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.638 10:48:46 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:20:15.638 10:48:46 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:15.638 10:48:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:15.638 10:48:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:15.638 10:48:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:15.638 ************************************ 00:20:15.638 START TEST raid_rebuild_test_sb_4k 00:20:15.638 ************************************ 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:15.638 10:48:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87097 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87097 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 87097 ']' 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.638 10:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.897 [2024-11-15 10:48:46.206270] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:20:15.897 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:15.897 Zero copy mechanism will not be used. 
00:20:15.897 [2024-11-15 10:48:46.206448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87097 ] 00:20:15.897 [2024-11-15 10:48:46.377335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.156 [2024-11-15 10:48:46.480340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.156 [2024-11-15 10:48:46.661092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.156 [2024-11-15 10:48:46.661176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.724 BaseBdev1_malloc 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.724 [2024-11-15 10:48:47.210368] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:16.724 [2024-11-15 10:48:47.210498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.724 [2024-11-15 10:48:47.210559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:16.724 [2024-11-15 10:48:47.210590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.724 [2024-11-15 10:48:47.213739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.724 [2024-11-15 10:48:47.213795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:16.724 BaseBdev1 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.724 BaseBdev2_malloc 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.724 [2024-11-15 10:48:47.256208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:16.724 [2024-11-15 10:48:47.256292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:16.724 [2024-11-15 10:48:47.256327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:16.724 [2024-11-15 10:48:47.256362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.724 [2024-11-15 10:48:47.259538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.724 [2024-11-15 10:48:47.259597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:16.724 BaseBdev2 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.724 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.983 spare_malloc 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.984 spare_delay 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.984 
[2024-11-15 10:48:47.326272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:16.984 [2024-11-15 10:48:47.326375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.984 [2024-11-15 10:48:47.326410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:16.984 [2024-11-15 10:48:47.326428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.984 [2024-11-15 10:48:47.329159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.984 [2024-11-15 10:48:47.329211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:16.984 spare 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.984 [2024-11-15 10:48:47.334367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.984 [2024-11-15 10:48:47.336654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.984 [2024-11-15 10:48:47.336910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:16.984 [2024-11-15 10:48:47.336943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:16.984 [2024-11-15 10:48:47.337291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:16.984 [2024-11-15 10:48:47.337544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:16.984 [2024-11-15 
10:48:47.337570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:16.984 [2024-11-15 10:48:47.337779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.984 "name": "raid_bdev1", 00:20:16.984 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:16.984 "strip_size_kb": 0, 00:20:16.984 "state": "online", 00:20:16.984 "raid_level": "raid1", 00:20:16.984 "superblock": true, 00:20:16.984 "num_base_bdevs": 2, 00:20:16.984 "num_base_bdevs_discovered": 2, 00:20:16.984 "num_base_bdevs_operational": 2, 00:20:16.984 "base_bdevs_list": [ 00:20:16.984 { 00:20:16.984 "name": "BaseBdev1", 00:20:16.984 "uuid": "f8967dc2-a4b7-5068-8c1d-0d821999a979", 00:20:16.984 "is_configured": true, 00:20:16.984 "data_offset": 256, 00:20:16.984 "data_size": 7936 00:20:16.984 }, 00:20:16.984 { 00:20:16.984 "name": "BaseBdev2", 00:20:16.984 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:16.984 "is_configured": true, 00:20:16.984 "data_offset": 256, 00:20:16.984 "data_size": 7936 00:20:16.984 } 00:20:16.984 ] 00:20:16.984 }' 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.984 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.553 [2024-11-15 10:48:47.850820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:17.553 10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:17.553 
10:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:17.812 [2024-11-15 10:48:48.306697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:17.812 /dev/nbd0 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:17.812 1+0 records in 00:20:17.812 1+0 records out 00:20:17.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319782 s, 12.8 MB/s 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:20:17.812 10:48:48 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:17.812 10:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:19.192 7936+0 records in 00:20:19.192 7936+0 records out 00:20:19.192 32505856 bytes (33 MB, 31 MiB) copied, 0.995675 s, 32.6 MB/s 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:19.192 
[2024-11-15 10:48:49.698783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.192 [2024-11-15 10:48:49.710888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.192 10:48:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.192 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.451 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.451 "name": "raid_bdev1", 00:20:19.451 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:19.451 "strip_size_kb": 0, 00:20:19.451 "state": "online", 00:20:19.451 "raid_level": "raid1", 00:20:19.451 "superblock": true, 00:20:19.451 "num_base_bdevs": 2, 00:20:19.451 "num_base_bdevs_discovered": 1, 00:20:19.451 "num_base_bdevs_operational": 1, 00:20:19.451 "base_bdevs_list": [ 00:20:19.451 { 00:20:19.451 "name": null, 00:20:19.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.451 "is_configured": false, 00:20:19.451 "data_offset": 0, 00:20:19.451 "data_size": 7936 00:20:19.451 }, 00:20:19.451 { 00:20:19.451 "name": "BaseBdev2", 00:20:19.451 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:19.451 "is_configured": true, 00:20:19.451 "data_offset": 256, 00:20:19.451 
"data_size": 7936 00:20:19.451 } 00:20:19.451 ] 00:20:19.451 }' 00:20:19.451 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.451 10:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.710 10:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:19.710 10:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.710 10:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.710 [2024-11-15 10:48:50.227048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.710 [2024-11-15 10:48:50.242286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:19.710 10:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.710 10:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:19.711 [2024-11-15 10:48:50.244583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.090 "name": "raid_bdev1", 00:20:21.090 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:21.090 "strip_size_kb": 0, 00:20:21.090 "state": "online", 00:20:21.090 "raid_level": "raid1", 00:20:21.090 "superblock": true, 00:20:21.090 "num_base_bdevs": 2, 00:20:21.090 "num_base_bdevs_discovered": 2, 00:20:21.090 "num_base_bdevs_operational": 2, 00:20:21.090 "process": { 00:20:21.090 "type": "rebuild", 00:20:21.090 "target": "spare", 00:20:21.090 "progress": { 00:20:21.090 "blocks": 2560, 00:20:21.090 "percent": 32 00:20:21.090 } 00:20:21.090 }, 00:20:21.090 "base_bdevs_list": [ 00:20:21.090 { 00:20:21.090 "name": "spare", 00:20:21.090 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:21.090 "is_configured": true, 00:20:21.090 "data_offset": 256, 00:20:21.090 "data_size": 7936 00:20:21.090 }, 00:20:21.090 { 00:20:21.090 "name": "BaseBdev2", 00:20:21.090 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:21.090 "is_configured": true, 00:20:21.090 "data_offset": 256, 00:20:21.090 "data_size": 7936 00:20:21.090 } 00:20:21.090 ] 00:20:21.090 }' 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.090 [2024-11-15 10:48:51.410044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.090 [2024-11-15 10:48:51.451573] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:21.090 [2024-11-15 10:48:51.451689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.090 [2024-11-15 10:48:51.451714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.090 [2024-11-15 10:48:51.451729] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.090 "name": "raid_bdev1", 00:20:21.090 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:21.090 "strip_size_kb": 0, 00:20:21.090 "state": "online", 00:20:21.090 "raid_level": "raid1", 00:20:21.090 "superblock": true, 00:20:21.090 "num_base_bdevs": 2, 00:20:21.090 "num_base_bdevs_discovered": 1, 00:20:21.090 "num_base_bdevs_operational": 1, 00:20:21.090 "base_bdevs_list": [ 00:20:21.090 { 00:20:21.090 "name": null, 00:20:21.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.090 "is_configured": false, 00:20:21.090 "data_offset": 0, 00:20:21.090 "data_size": 7936 00:20:21.090 }, 00:20:21.090 { 00:20:21.090 "name": "BaseBdev2", 00:20:21.090 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:21.090 "is_configured": true, 00:20:21.090 "data_offset": 256, 00:20:21.090 "data_size": 7936 00:20:21.090 } 00:20:21.090 ] 00:20:21.090 }' 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.090 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.658 10:48:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.658 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.658 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.658 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.658 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.658 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.658 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.658 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.658 10:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.658 "name": "raid_bdev1", 00:20:21.658 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:21.658 "strip_size_kb": 0, 00:20:21.658 "state": "online", 00:20:21.658 "raid_level": "raid1", 00:20:21.658 "superblock": true, 00:20:21.658 "num_base_bdevs": 2, 00:20:21.658 "num_base_bdevs_discovered": 1, 00:20:21.658 "num_base_bdevs_operational": 1, 00:20:21.658 "base_bdevs_list": [ 00:20:21.658 { 00:20:21.658 "name": null, 00:20:21.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.658 "is_configured": false, 00:20:21.658 "data_offset": 0, 00:20:21.658 "data_size": 7936 00:20:21.658 }, 00:20:21.658 { 00:20:21.658 "name": "BaseBdev2", 00:20:21.658 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:21.658 "is_configured": true, 00:20:21.658 "data_offset": 
256, 00:20:21.658 "data_size": 7936 00:20:21.658 } 00:20:21.658 ] 00:20:21.658 }' 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.658 [2024-11-15 10:48:52.154704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:21.658 [2024-11-15 10:48:52.168761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:21.658 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.659 10:48:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:21.659 [2024-11-15 10:48:52.171117] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.033 "name": "raid_bdev1", 00:20:23.033 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:23.033 "strip_size_kb": 0, 00:20:23.033 "state": "online", 00:20:23.033 "raid_level": "raid1", 00:20:23.033 "superblock": true, 00:20:23.033 "num_base_bdevs": 2, 00:20:23.033 "num_base_bdevs_discovered": 2, 00:20:23.033 "num_base_bdevs_operational": 2, 00:20:23.033 "process": { 00:20:23.033 "type": "rebuild", 00:20:23.033 "target": "spare", 00:20:23.033 "progress": { 00:20:23.033 "blocks": 2560, 00:20:23.033 "percent": 32 00:20:23.033 } 00:20:23.033 }, 00:20:23.033 "base_bdevs_list": [ 00:20:23.033 { 00:20:23.033 "name": "spare", 00:20:23.033 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:23.033 "is_configured": true, 00:20:23.033 "data_offset": 256, 00:20:23.033 "data_size": 7936 00:20:23.033 }, 00:20:23.033 { 00:20:23.033 "name": "BaseBdev2", 00:20:23.033 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:23.033 "is_configured": true, 00:20:23.033 "data_offset": 256, 00:20:23.033 "data_size": 7936 00:20:23.033 } 00:20:23.033 ] 00:20:23.033 }' 00:20:23.033 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:23.034 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=727 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.034 10:48:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.034 "name": "raid_bdev1", 00:20:23.034 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:23.034 "strip_size_kb": 0, 00:20:23.034 "state": "online", 00:20:23.034 "raid_level": "raid1", 00:20:23.034 "superblock": true, 00:20:23.034 "num_base_bdevs": 2, 00:20:23.034 "num_base_bdevs_discovered": 2, 00:20:23.034 "num_base_bdevs_operational": 2, 00:20:23.034 "process": { 00:20:23.034 "type": "rebuild", 00:20:23.034 "target": "spare", 00:20:23.034 "progress": { 00:20:23.034 "blocks": 2816, 00:20:23.034 "percent": 35 00:20:23.034 } 00:20:23.034 }, 00:20:23.034 "base_bdevs_list": [ 00:20:23.034 { 00:20:23.034 "name": "spare", 00:20:23.034 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:23.034 "is_configured": true, 00:20:23.034 "data_offset": 256, 00:20:23.034 "data_size": 7936 00:20:23.034 }, 00:20:23.034 { 00:20:23.034 "name": "BaseBdev2", 00:20:23.034 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:23.034 "is_configured": true, 00:20:23.034 "data_offset": 256, 00:20:23.034 "data_size": 7936 00:20:23.034 } 00:20:23.034 ] 00:20:23.034 }' 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.034 10:48:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.067 "name": "raid_bdev1", 00:20:24.067 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:24.067 "strip_size_kb": 0, 00:20:24.067 "state": "online", 00:20:24.067 "raid_level": "raid1", 00:20:24.067 "superblock": true, 00:20:24.067 "num_base_bdevs": 2, 00:20:24.067 "num_base_bdevs_discovered": 2, 00:20:24.067 "num_base_bdevs_operational": 2, 00:20:24.067 "process": { 00:20:24.067 "type": "rebuild", 00:20:24.067 "target": "spare", 00:20:24.067 "progress": { 00:20:24.067 "blocks": 5888, 00:20:24.067 "percent": 74 00:20:24.067 } 00:20:24.067 }, 00:20:24.067 "base_bdevs_list": [ 00:20:24.067 { 
00:20:24.067 "name": "spare", 00:20:24.067 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:24.067 "is_configured": true, 00:20:24.067 "data_offset": 256, 00:20:24.067 "data_size": 7936 00:20:24.067 }, 00:20:24.067 { 00:20:24.067 "name": "BaseBdev2", 00:20:24.067 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:24.067 "is_configured": true, 00:20:24.067 "data_offset": 256, 00:20:24.067 "data_size": 7936 00:20:24.067 } 00:20:24.067 ] 00:20:24.067 }' 00:20:24.067 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.328 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:24.328 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.328 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:24.328 10:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:24.897 [2024-11-15 10:48:55.289461] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:24.897 [2024-11-15 10:48:55.289574] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:24.897 [2024-11-15 10:48:55.289802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.156 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:25.156 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.156 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.156 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:25.156 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:20:25.156 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.416 "name": "raid_bdev1", 00:20:25.416 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:25.416 "strip_size_kb": 0, 00:20:25.416 "state": "online", 00:20:25.416 "raid_level": "raid1", 00:20:25.416 "superblock": true, 00:20:25.416 "num_base_bdevs": 2, 00:20:25.416 "num_base_bdevs_discovered": 2, 00:20:25.416 "num_base_bdevs_operational": 2, 00:20:25.416 "base_bdevs_list": [ 00:20:25.416 { 00:20:25.416 "name": "spare", 00:20:25.416 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:25.416 "is_configured": true, 00:20:25.416 "data_offset": 256, 00:20:25.416 "data_size": 7936 00:20:25.416 }, 00:20:25.416 { 00:20:25.416 "name": "BaseBdev2", 00:20:25.416 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:25.416 "is_configured": true, 00:20:25.416 "data_offset": 256, 00:20:25.416 "data_size": 7936 00:20:25.416 } 00:20:25.416 ] 00:20:25.416 }' 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.416 "name": "raid_bdev1", 00:20:25.416 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:25.416 "strip_size_kb": 0, 00:20:25.416 "state": "online", 00:20:25.416 "raid_level": "raid1", 00:20:25.416 "superblock": true, 00:20:25.416 "num_base_bdevs": 2, 00:20:25.416 "num_base_bdevs_discovered": 2, 00:20:25.416 "num_base_bdevs_operational": 2, 00:20:25.416 "base_bdevs_list": [ 00:20:25.416 { 00:20:25.416 "name": "spare", 00:20:25.416 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:25.416 "is_configured": true, 00:20:25.416 
"data_offset": 256, 00:20:25.416 "data_size": 7936 00:20:25.416 }, 00:20:25.416 { 00:20:25.416 "name": "BaseBdev2", 00:20:25.416 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:25.416 "is_configured": true, 00:20:25.416 "data_offset": 256, 00:20:25.416 "data_size": 7936 00:20:25.416 } 00:20:25.416 ] 00:20:25.416 }' 00:20:25.416 10:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.674 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:25.674 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.674 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:25.674 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:25.674 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.674 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.674 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.674 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.674 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.675 "name": "raid_bdev1", 00:20:25.675 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:25.675 "strip_size_kb": 0, 00:20:25.675 "state": "online", 00:20:25.675 "raid_level": "raid1", 00:20:25.675 "superblock": true, 00:20:25.675 "num_base_bdevs": 2, 00:20:25.675 "num_base_bdevs_discovered": 2, 00:20:25.675 "num_base_bdevs_operational": 2, 00:20:25.675 "base_bdevs_list": [ 00:20:25.675 { 00:20:25.675 "name": "spare", 00:20:25.675 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:25.675 "is_configured": true, 00:20:25.675 "data_offset": 256, 00:20:25.675 "data_size": 7936 00:20:25.675 }, 00:20:25.675 { 00:20:25.675 "name": "BaseBdev2", 00:20:25.675 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:25.675 "is_configured": true, 00:20:25.675 "data_offset": 256, 00:20:25.675 "data_size": 7936 00:20:25.675 } 00:20:25.675 ] 00:20:25.675 }' 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.675 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.241 
[2024-11-15 10:48:56.673023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:26.241 [2024-11-15 10:48:56.673243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.241 [2024-11-15 10:48:56.673396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.241 [2024-11-15 10:48:56.673497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.241 [2024-11-15 10:48:56.673518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:26.241 10:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:26.808 /dev/nbd0 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.808 1+0 records in 00:20:26.808 1+0 records out 00:20:26.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277643 s, 14.8 MB/s 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:26.808 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:27.067 /dev/nbd1 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:27.067 1+0 records in 00:20:27.067 1+0 records out 00:20:27.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481545 s, 8.5 MB/s 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:27.067 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:27.634 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:27.635 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:27.635 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:27.635 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:27.635 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:27.635 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:27.635 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:27.635 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:27.635 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:27.635 10:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:27.635 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:27.635 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:27.635 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:27.635 10:48:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:27.635 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:27.635 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:27.635 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:27.635 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.893 [2024-11-15 10:48:58.207123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.893 [2024-11-15 10:48:58.207198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.893 [2024-11-15 10:48:58.207237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:27.893 [2024-11-15 10:48:58.207254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.893 [2024-11-15 10:48:58.210014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.893 
[2024-11-15 10:48:58.210061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.893 [2024-11-15 10:48:58.210189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:27.893 [2024-11-15 10:48:58.210255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.893 [2024-11-15 10:48:58.210481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.893 spare 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.893 [2024-11-15 10:48:58.310638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:27.893 [2024-11-15 10:48:58.310710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:27.893 [2024-11-15 10:48:58.311139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:27.893 [2024-11-15 10:48:58.311449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:27.893 [2024-11-15 10:48:58.311478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:27.893 [2024-11-15 10:48:58.311748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:27.893 10:48:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.893 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.894 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.894 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.894 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.894 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.894 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.894 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.894 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.894 "name": "raid_bdev1", 00:20:27.894 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:27.894 "strip_size_kb": 0, 00:20:27.894 "state": "online", 00:20:27.894 "raid_level": "raid1", 00:20:27.894 "superblock": true, 00:20:27.894 "num_base_bdevs": 2, 00:20:27.894 "num_base_bdevs_discovered": 2, 00:20:27.894 "num_base_bdevs_operational": 2, 
00:20:27.894 "base_bdevs_list": [ 00:20:27.894 { 00:20:27.894 "name": "spare", 00:20:27.894 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:27.894 "is_configured": true, 00:20:27.894 "data_offset": 256, 00:20:27.894 "data_size": 7936 00:20:27.894 }, 00:20:27.894 { 00:20:27.894 "name": "BaseBdev2", 00:20:27.894 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:27.894 "is_configured": true, 00:20:27.894 "data_offset": 256, 00:20:27.894 "data_size": 7936 00:20:27.894 } 00:20:27.894 ] 00:20:27.894 }' 00:20:27.894 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.894 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.460 "name": "raid_bdev1", 00:20:28.460 
"uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:28.460 "strip_size_kb": 0, 00:20:28.460 "state": "online", 00:20:28.460 "raid_level": "raid1", 00:20:28.460 "superblock": true, 00:20:28.460 "num_base_bdevs": 2, 00:20:28.460 "num_base_bdevs_discovered": 2, 00:20:28.460 "num_base_bdevs_operational": 2, 00:20:28.460 "base_bdevs_list": [ 00:20:28.460 { 00:20:28.460 "name": "spare", 00:20:28.460 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:28.460 "is_configured": true, 00:20:28.460 "data_offset": 256, 00:20:28.460 "data_size": 7936 00:20:28.460 }, 00:20:28.460 { 00:20:28.460 "name": "BaseBdev2", 00:20:28.460 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:28.460 "is_configured": true, 00:20:28.460 "data_offset": 256, 00:20:28.460 "data_size": 7936 00:20:28.460 } 00:20:28.460 ] 00:20:28.460 }' 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.460 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.460 10:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:28.460 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.460 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.719 [2024-11-15 10:48:59.059922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.719 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.720 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.720 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.720 10:48:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.720 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.720 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.720 "name": "raid_bdev1", 00:20:28.720 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:28.720 "strip_size_kb": 0, 00:20:28.720 "state": "online", 00:20:28.720 "raid_level": "raid1", 00:20:28.720 "superblock": true, 00:20:28.720 "num_base_bdevs": 2, 00:20:28.720 "num_base_bdevs_discovered": 1, 00:20:28.720 "num_base_bdevs_operational": 1, 00:20:28.720 "base_bdevs_list": [ 00:20:28.720 { 00:20:28.720 "name": null, 00:20:28.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.720 "is_configured": false, 00:20:28.720 "data_offset": 0, 00:20:28.720 "data_size": 7936 00:20:28.720 }, 00:20:28.720 { 00:20:28.720 "name": "BaseBdev2", 00:20:28.720 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:28.720 "is_configured": true, 00:20:28.720 "data_offset": 256, 00:20:28.720 "data_size": 7936 00:20:28.720 } 00:20:28.720 ] 00:20:28.720 }' 00:20:28.720 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.720 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.286 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.286 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.286 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.286 [2024-11-15 10:48:59.584181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.286 [2024-11-15 10:48:59.584450] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:20:29.286 [2024-11-15 10:48:59.584500] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:29.286 [2024-11-15 10:48:59.584545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.286 [2024-11-15 10:48:59.598584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:29.286 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.286 10:48:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:29.286 [2024-11-15 10:48:59.600964] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.288 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.288 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:30.289 "name": "raid_bdev1", 00:20:30.289 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:30.289 "strip_size_kb": 0, 00:20:30.289 "state": "online", 00:20:30.289 "raid_level": "raid1", 00:20:30.289 "superblock": true, 00:20:30.289 "num_base_bdevs": 2, 00:20:30.289 "num_base_bdevs_discovered": 2, 00:20:30.289 "num_base_bdevs_operational": 2, 00:20:30.289 "process": { 00:20:30.289 "type": "rebuild", 00:20:30.289 "target": "spare", 00:20:30.289 "progress": { 00:20:30.289 "blocks": 2560, 00:20:30.289 "percent": 32 00:20:30.289 } 00:20:30.289 }, 00:20:30.289 "base_bdevs_list": [ 00:20:30.289 { 00:20:30.289 "name": "spare", 00:20:30.289 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:30.289 "is_configured": true, 00:20:30.289 "data_offset": 256, 00:20:30.289 "data_size": 7936 00:20:30.289 }, 00:20:30.289 { 00:20:30.289 "name": "BaseBdev2", 00:20:30.289 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:30.289 "is_configured": true, 00:20:30.289 "data_offset": 256, 00:20:30.289 "data_size": 7936 00:20:30.289 } 00:20:30.289 ] 00:20:30.289 }' 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.289 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.289 [2024-11-15 10:49:00.782912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:20:30.289 [2024-11-15 10:49:00.808429] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:30.289 [2024-11-15 10:49:00.808591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.289 [2024-11-15 10:49:00.808631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.289 [2024-11-15 10:49:00.808657] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:30.561 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.561 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.561 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.561 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.561 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.561 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.561 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.561 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.561 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.562 "name": "raid_bdev1", 00:20:30.562 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:30.562 "strip_size_kb": 0, 00:20:30.562 "state": "online", 00:20:30.562 "raid_level": "raid1", 00:20:30.562 "superblock": true, 00:20:30.562 "num_base_bdevs": 2, 00:20:30.562 "num_base_bdevs_discovered": 1, 00:20:30.562 "num_base_bdevs_operational": 1, 00:20:30.562 "base_bdevs_list": [ 00:20:30.562 { 00:20:30.562 "name": null, 00:20:30.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.562 "is_configured": false, 00:20:30.562 "data_offset": 0, 00:20:30.562 "data_size": 7936 00:20:30.562 }, 00:20:30.562 { 00:20:30.562 "name": "BaseBdev2", 00:20:30.562 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:30.562 "is_configured": true, 00:20:30.562 "data_offset": 256, 00:20:30.562 "data_size": 7936 00:20:30.562 } 00:20:30.562 ] 00:20:30.562 }' 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.562 10:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.128 10:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:31.128 10:49:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.128 10:49:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.128 [2024-11-15 10:49:01.385013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:31.128 [2024-11-15 
10:49:01.385110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.128 [2024-11-15 10:49:01.385147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:31.128 [2024-11-15 10:49:01.385165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.128 [2024-11-15 10:49:01.385796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.128 [2024-11-15 10:49:01.385849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:31.128 [2024-11-15 10:49:01.385974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:31.128 [2024-11-15 10:49:01.386001] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:31.128 [2024-11-15 10:49:01.386016] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:31.128 [2024-11-15 10:49:01.386052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:31.128 [2024-11-15 10:49:01.400142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:31.128 spare 00:20:31.128 10:49:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.128 10:49:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:31.128 [2024-11-15 10:49:01.402508] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.065 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.065 "name": "raid_bdev1", 00:20:32.065 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:32.065 "strip_size_kb": 0, 00:20:32.065 
"state": "online", 00:20:32.065 "raid_level": "raid1", 00:20:32.065 "superblock": true, 00:20:32.065 "num_base_bdevs": 2, 00:20:32.065 "num_base_bdevs_discovered": 2, 00:20:32.065 "num_base_bdevs_operational": 2, 00:20:32.065 "process": { 00:20:32.065 "type": "rebuild", 00:20:32.065 "target": "spare", 00:20:32.065 "progress": { 00:20:32.065 "blocks": 2560, 00:20:32.065 "percent": 32 00:20:32.065 } 00:20:32.065 }, 00:20:32.065 "base_bdevs_list": [ 00:20:32.065 { 00:20:32.065 "name": "spare", 00:20:32.065 "uuid": "ef09f12c-f69d-5fd5-af48-f298ae0fc2f7", 00:20:32.065 "is_configured": true, 00:20:32.065 "data_offset": 256, 00:20:32.065 "data_size": 7936 00:20:32.065 }, 00:20:32.065 { 00:20:32.065 "name": "BaseBdev2", 00:20:32.065 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:32.065 "is_configured": true, 00:20:32.065 "data_offset": 256, 00:20:32.065 "data_size": 7936 00:20:32.066 } 00:20:32.066 ] 00:20:32.066 }' 00:20:32.066 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.066 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.066 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.066 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.066 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:32.066 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.066 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.066 [2024-11-15 10:49:02.572707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.066 [2024-11-15 10:49:02.610163] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:20:32.066 [2024-11-15 10:49:02.610261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.066 [2024-11-15 10:49:02.610290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.066 [2024-11-15 10:49:02.610302] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.325 10:49:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.325 "name": "raid_bdev1", 00:20:32.325 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:32.325 "strip_size_kb": 0, 00:20:32.325 "state": "online", 00:20:32.325 "raid_level": "raid1", 00:20:32.325 "superblock": true, 00:20:32.325 "num_base_bdevs": 2, 00:20:32.325 "num_base_bdevs_discovered": 1, 00:20:32.325 "num_base_bdevs_operational": 1, 00:20:32.325 "base_bdevs_list": [ 00:20:32.325 { 00:20:32.325 "name": null, 00:20:32.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.325 "is_configured": false, 00:20:32.325 "data_offset": 0, 00:20:32.325 "data_size": 7936 00:20:32.325 }, 00:20:32.325 { 00:20:32.325 "name": "BaseBdev2", 00:20:32.325 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:32.325 "is_configured": true, 00:20:32.325 "data_offset": 256, 00:20:32.325 "data_size": 7936 00:20:32.325 } 00:20:32.325 ] 00:20:32.325 }' 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.325 10:49:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.894 "name": "raid_bdev1", 00:20:32.894 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:32.894 "strip_size_kb": 0, 00:20:32.894 "state": "online", 00:20:32.894 "raid_level": "raid1", 00:20:32.894 "superblock": true, 00:20:32.894 "num_base_bdevs": 2, 00:20:32.894 "num_base_bdevs_discovered": 1, 00:20:32.894 "num_base_bdevs_operational": 1, 00:20:32.894 "base_bdevs_list": [ 00:20:32.894 { 00:20:32.894 "name": null, 00:20:32.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.894 "is_configured": false, 00:20:32.894 "data_offset": 0, 00:20:32.894 "data_size": 7936 00:20:32.894 }, 00:20:32.894 { 00:20:32.894 "name": "BaseBdev2", 00:20:32.894 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:32.894 "is_configured": true, 00:20:32.894 "data_offset": 256, 00:20:32.894 "data_size": 7936 00:20:32.894 } 00:20:32.894 ] 00:20:32.894 }' 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.894 [2024-11-15 10:49:03.344976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:32.894 [2024-11-15 10:49:03.345047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.894 [2024-11-15 10:49:03.345090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:32.894 [2024-11-15 10:49:03.345123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.894 [2024-11-15 10:49:03.345715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.894 [2024-11-15 10:49:03.345752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:32.894 [2024-11-15 10:49:03.345860] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:32.894 [2024-11-15 10:49:03.345883] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:32.894 [2024-11-15 10:49:03.345899] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:32.894 [2024-11-15 10:49:03.345913] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:32.894 BaseBdev1 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.894 10:49:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.831 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.832 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.832 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.832 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:33.832 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.091 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.091 "name": "raid_bdev1", 00:20:34.091 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:34.091 "strip_size_kb": 0, 00:20:34.091 "state": "online", 00:20:34.091 "raid_level": "raid1", 00:20:34.091 "superblock": true, 00:20:34.091 "num_base_bdevs": 2, 00:20:34.091 "num_base_bdevs_discovered": 1, 00:20:34.091 "num_base_bdevs_operational": 1, 00:20:34.091 "base_bdevs_list": [ 00:20:34.091 { 00:20:34.091 "name": null, 00:20:34.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.091 "is_configured": false, 00:20:34.091 "data_offset": 0, 00:20:34.091 "data_size": 7936 00:20:34.091 }, 00:20:34.091 { 00:20:34.091 "name": "BaseBdev2", 00:20:34.091 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:34.091 "is_configured": true, 00:20:34.091 "data_offset": 256, 00:20:34.091 "data_size": 7936 00:20:34.091 } 00:20:34.091 ] 00:20:34.091 }' 00:20:34.091 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.091 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.348 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.348 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.348 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:34.348 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:34.348 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.349 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.349 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:34.349 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.349 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.349 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.606 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.606 "name": "raid_bdev1", 00:20:34.606 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:34.606 "strip_size_kb": 0, 00:20:34.606 "state": "online", 00:20:34.606 "raid_level": "raid1", 00:20:34.606 "superblock": true, 00:20:34.606 "num_base_bdevs": 2, 00:20:34.606 "num_base_bdevs_discovered": 1, 00:20:34.606 "num_base_bdevs_operational": 1, 00:20:34.606 "base_bdevs_list": [ 00:20:34.606 { 00:20:34.606 "name": null, 00:20:34.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.606 "is_configured": false, 00:20:34.606 "data_offset": 0, 00:20:34.606 "data_size": 7936 00:20:34.606 }, 00:20:34.606 { 00:20:34.606 "name": "BaseBdev2", 00:20:34.606 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:34.606 "is_configured": true, 00:20:34.606 "data_offset": 256, 00:20:34.606 "data_size": 7936 00:20:34.606 } 00:20:34.606 ] 00:20:34.606 }' 00:20:34.606 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.606 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.606 10:49:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.606 [2024-11-15 10:49:05.013616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.606 [2024-11-15 10:49:05.013892] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:34.606 [2024-11-15 10:49:05.013935] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:34.606 request: 00:20:34.606 { 00:20:34.606 "base_bdev": "BaseBdev1", 00:20:34.606 "raid_bdev": "raid_bdev1", 00:20:34.606 "method": "bdev_raid_add_base_bdev", 00:20:34.606 "req_id": 1 00:20:34.606 } 00:20:34.606 Got JSON-RPC error response 00:20:34.606 response: 00:20:34.606 { 00:20:34.606 "code": -22, 00:20:34.606 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:34.606 } 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:34.606 10:49:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.585 "name": "raid_bdev1", 00:20:35.585 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:35.585 "strip_size_kb": 0, 00:20:35.585 "state": "online", 00:20:35.585 "raid_level": "raid1", 00:20:35.585 "superblock": true, 00:20:35.585 "num_base_bdevs": 2, 00:20:35.585 "num_base_bdevs_discovered": 1, 00:20:35.585 "num_base_bdevs_operational": 1, 00:20:35.585 "base_bdevs_list": [ 00:20:35.585 { 00:20:35.585 "name": null, 00:20:35.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.585 "is_configured": false, 00:20:35.585 "data_offset": 0, 00:20:35.585 "data_size": 7936 00:20:35.585 }, 00:20:35.585 { 00:20:35.585 "name": "BaseBdev2", 00:20:35.585 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:35.585 "is_configured": true, 00:20:35.585 "data_offset": 256, 00:20:35.585 "data_size": 7936 00:20:35.585 } 00:20:35.585 ] 00:20:35.585 }' 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.585 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.153 10:49:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.153 "name": "raid_bdev1", 00:20:36.153 "uuid": "115ed356-81ac-406d-a8df-b85c969eb7f4", 00:20:36.153 "strip_size_kb": 0, 00:20:36.153 "state": "online", 00:20:36.153 "raid_level": "raid1", 00:20:36.153 "superblock": true, 00:20:36.153 "num_base_bdevs": 2, 00:20:36.153 "num_base_bdevs_discovered": 1, 00:20:36.153 "num_base_bdevs_operational": 1, 00:20:36.153 "base_bdevs_list": [ 00:20:36.153 { 00:20:36.153 "name": null, 00:20:36.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.153 "is_configured": false, 00:20:36.153 "data_offset": 0, 00:20:36.153 "data_size": 7936 00:20:36.153 }, 00:20:36.153 { 00:20:36.153 "name": "BaseBdev2", 00:20:36.153 "uuid": "3e6bb2df-7b25-569d-8a83-6878e42ada5a", 00:20:36.153 "is_configured": true, 00:20:36.153 "data_offset": 256, 00:20:36.153 "data_size": 7936 00:20:36.153 } 00:20:36.153 ] 00:20:36.153 }' 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:36.153 10:49:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87097 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 87097 ']' 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 87097 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:20:36.153 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.154 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87097 00:20:36.412 killing process with pid 87097 00:20:36.412 Received shutdown signal, test time was about 60.000000 seconds 00:20:36.412 00:20:36.412 Latency(us) 00:20:36.412 [2024-11-15T10:49:06.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.412 [2024-11-15T10:49:06.972Z] =================================================================================================================== 00:20:36.412 [2024-11-15T10:49:06.972Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.413 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:36.413 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:36.413 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87097' 00:20:36.413 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 87097 00:20:36.413 [2024-11-15 10:49:06.725184] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:36.413 10:49:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 87097 00:20:36.413 [2024-11-15 10:49:06.725361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.413 [2024-11-15 
10:49:06.725437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:36.413 [2024-11-15 10:49:06.725459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:36.671 [2024-11-15 10:49:06.981106] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:37.606 10:49:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:20:37.606 00:20:37.606 real 0m21.878s 00:20:37.606 user 0m29.970s 00:20:37.606 sys 0m2.350s 00:20:37.606 ************************************ 00:20:37.606 END TEST raid_rebuild_test_sb_4k 00:20:37.606 ************************************ 00:20:37.606 10:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:37.606 10:49:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:37.606 10:49:08 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:20:37.606 10:49:08 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:20:37.606 10:49:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:37.606 10:49:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:37.606 10:49:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.606 ************************************ 00:20:37.606 START TEST raid_state_function_test_sb_md_separate 00:20:37.606 ************************************ 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:37.606 
10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:37.606 10:49:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87796 00:20:37.606 Process raid pid: 87796 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87796' 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87796 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87796 ']' 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:37.606 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.606 [2024-11-15 10:49:08.129117] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:20:37.606 [2024-11-15 10:49:08.129267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.864 [2024-11-15 10:49:08.301186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.864 [2024-11-15 10:49:08.407853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.123 [2024-11-15 10:49:08.596880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.123 [2024-11-15 10:49:08.596947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.382 [2024-11-15 10:49:08.738838] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:38.382 [2024-11-15 10:49:08.738912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:20:38.382 [2024-11-15 10:49:08.738929] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:38.382 [2024-11-15 10:49:08.738944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.382 "name": "Existed_Raid", 00:20:38.382 "uuid": "b030a447-3657-4c94-94d0-200e00dbaeb9", 00:20:38.382 "strip_size_kb": 0, 00:20:38.382 "state": "configuring", 00:20:38.382 "raid_level": "raid1", 00:20:38.382 "superblock": true, 00:20:38.382 "num_base_bdevs": 2, 00:20:38.382 "num_base_bdevs_discovered": 0, 00:20:38.382 "num_base_bdevs_operational": 2, 00:20:38.382 "base_bdevs_list": [ 00:20:38.382 { 00:20:38.382 "name": "BaseBdev1", 00:20:38.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.382 "is_configured": false, 00:20:38.382 "data_offset": 0, 00:20:38.382 "data_size": 0 00:20:38.382 }, 00:20:38.382 { 00:20:38.382 "name": "BaseBdev2", 00:20:38.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.382 "is_configured": false, 00:20:38.382 "data_offset": 0, 00:20:38.382 "data_size": 0 00:20:38.382 } 00:20:38.382 ] 00:20:38.382 }' 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.382 10:49:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 
[2024-11-15 10:49:09.331026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:38.949 [2024-11-15 10:49:09.331096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 [2024-11-15 10:49:09.338999] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:38.949 [2024-11-15 10:49:09.339063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:38.949 [2024-11-15 10:49:09.339079] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:38.949 [2024-11-15 10:49:09.339097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 BaseBdev1 00:20:38.949 [2024-11-15 10:49:09.380586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 [ 00:20:38.949 { 00:20:38.949 "name": "BaseBdev1", 00:20:38.949 "aliases": [ 00:20:38.949 "461964a3-ee44-4980-85da-d6517af24161" 00:20:38.949 ], 00:20:38.949 "product_name": "Malloc disk", 
00:20:38.949 "block_size": 4096, 00:20:38.949 "num_blocks": 8192, 00:20:38.949 "uuid": "461964a3-ee44-4980-85da-d6517af24161", 00:20:38.949 "md_size": 32, 00:20:38.949 "md_interleave": false, 00:20:38.949 "dif_type": 0, 00:20:38.949 "assigned_rate_limits": { 00:20:38.949 "rw_ios_per_sec": 0, 00:20:38.949 "rw_mbytes_per_sec": 0, 00:20:38.949 "r_mbytes_per_sec": 0, 00:20:38.949 "w_mbytes_per_sec": 0 00:20:38.949 }, 00:20:38.949 "claimed": true, 00:20:38.949 "claim_type": "exclusive_write", 00:20:38.949 "zoned": false, 00:20:38.949 "supported_io_types": { 00:20:38.949 "read": true, 00:20:38.949 "write": true, 00:20:38.949 "unmap": true, 00:20:38.949 "flush": true, 00:20:38.949 "reset": true, 00:20:38.949 "nvme_admin": false, 00:20:38.949 "nvme_io": false, 00:20:38.949 "nvme_io_md": false, 00:20:38.949 "write_zeroes": true, 00:20:38.949 "zcopy": true, 00:20:38.949 "get_zone_info": false, 00:20:38.949 "zone_management": false, 00:20:38.949 "zone_append": false, 00:20:38.949 "compare": false, 00:20:38.949 "compare_and_write": false, 00:20:38.949 "abort": true, 00:20:38.949 "seek_hole": false, 00:20:38.949 "seek_data": false, 00:20:38.949 "copy": true, 00:20:38.949 "nvme_iov_md": false 00:20:38.949 }, 00:20:38.949 "memory_domains": [ 00:20:38.949 { 00:20:38.949 "dma_device_id": "system", 00:20:38.949 "dma_device_type": 1 00:20:38.949 }, 00:20:38.949 { 00:20:38.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.949 "dma_device_type": 2 00:20:38.949 } 00:20:38.949 ], 00:20:38.949 "driver_specific": {} 00:20:38.949 } 00:20:38.949 ] 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:38.949 10:49:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.949 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.949 "name": "Existed_Raid", 00:20:38.949 "uuid": "fda9c294-b318-4fde-877c-4b3eb3f30f78", 
00:20:38.949 "strip_size_kb": 0, 00:20:38.949 "state": "configuring", 00:20:38.949 "raid_level": "raid1", 00:20:38.949 "superblock": true, 00:20:38.949 "num_base_bdevs": 2, 00:20:38.949 "num_base_bdevs_discovered": 1, 00:20:38.949 "num_base_bdevs_operational": 2, 00:20:38.949 "base_bdevs_list": [ 00:20:38.949 { 00:20:38.949 "name": "BaseBdev1", 00:20:38.949 "uuid": "461964a3-ee44-4980-85da-d6517af24161", 00:20:38.949 "is_configured": true, 00:20:38.949 "data_offset": 256, 00:20:38.949 "data_size": 7936 00:20:38.949 }, 00:20:38.949 { 00:20:38.949 "name": "BaseBdev2", 00:20:38.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.949 "is_configured": false, 00:20:38.949 "data_offset": 0, 00:20:38.950 "data_size": 0 00:20:38.950 } 00:20:38.950 ] 00:20:38.950 }' 00:20:38.950 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.950 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.516 [2024-11-15 10:49:09.984943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:39.516 [2024-11-15 10:49:09.985014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:39.516 10:49:09 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.516 [2024-11-15 10:49:09.992981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.516 [2024-11-15 10:49:09.995266] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:39.516 [2024-11-15 10:49:09.995324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.516 10:49:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.516 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.516 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.516 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.516 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.516 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.516 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.516 "name": "Existed_Raid", 00:20:39.516 "uuid": "1c410086-b2cc-41eb-ba70-bf60fdb23bb0", 00:20:39.516 "strip_size_kb": 0, 00:20:39.516 "state": "configuring", 00:20:39.516 "raid_level": "raid1", 00:20:39.516 "superblock": true, 00:20:39.516 "num_base_bdevs": 2, 00:20:39.516 "num_base_bdevs_discovered": 1, 00:20:39.516 "num_base_bdevs_operational": 2, 00:20:39.517 "base_bdevs_list": [ 00:20:39.517 { 00:20:39.517 "name": "BaseBdev1", 00:20:39.517 "uuid": "461964a3-ee44-4980-85da-d6517af24161", 00:20:39.517 "is_configured": true, 00:20:39.517 "data_offset": 256, 00:20:39.517 "data_size": 7936 00:20:39.517 }, 00:20:39.517 { 00:20:39.517 "name": "BaseBdev2", 00:20:39.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.517 "is_configured": false, 00:20:39.517 "data_offset": 0, 00:20:39.517 "data_size": 0 00:20:39.517 } 00:20:39.517 ] 00:20:39.517 }' 00:20:39.517 10:49:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.517 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.083 [2024-11-15 10:49:10.616758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:40.083 [2024-11-15 10:49:10.617057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:40.083 [2024-11-15 10:49:10.617080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:40.083 [2024-11-15 10:49:10.617176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:40.083 [2024-11-15 10:49:10.617342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:40.083 [2024-11-15 10:49:10.617383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:40.083 [2024-11-15 10:49:10.617505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.083 BaseBdev2 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.083 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.083 [ 00:20:40.083 { 00:20:40.083 "name": "BaseBdev2", 00:20:40.083 "aliases": [ 00:20:40.083 "77870f80-db93-49b6-9765-3c36222c36fc" 00:20:40.083 ], 00:20:40.083 "product_name": "Malloc disk", 00:20:40.083 "block_size": 4096, 00:20:40.083 "num_blocks": 8192, 00:20:40.083 "uuid": "77870f80-db93-49b6-9765-3c36222c36fc", 00:20:40.083 "md_size": 32, 00:20:40.083 "md_interleave": false, 00:20:40.083 "dif_type": 0, 00:20:40.083 "assigned_rate_limits": { 00:20:40.083 "rw_ios_per_sec": 0, 00:20:40.083 "rw_mbytes_per_sec": 0, 00:20:40.083 "r_mbytes_per_sec": 0, 00:20:40.083 "w_mbytes_per_sec": 0 00:20:40.083 }, 00:20:40.083 "claimed": true, 00:20:40.342 "claim_type": 
"exclusive_write", 00:20:40.342 "zoned": false, 00:20:40.342 "supported_io_types": { 00:20:40.342 "read": true, 00:20:40.342 "write": true, 00:20:40.342 "unmap": true, 00:20:40.342 "flush": true, 00:20:40.342 "reset": true, 00:20:40.342 "nvme_admin": false, 00:20:40.342 "nvme_io": false, 00:20:40.342 "nvme_io_md": false, 00:20:40.342 "write_zeroes": true, 00:20:40.342 "zcopy": true, 00:20:40.342 "get_zone_info": false, 00:20:40.342 "zone_management": false, 00:20:40.342 "zone_append": false, 00:20:40.342 "compare": false, 00:20:40.342 "compare_and_write": false, 00:20:40.342 "abort": true, 00:20:40.342 "seek_hole": false, 00:20:40.342 "seek_data": false, 00:20:40.342 "copy": true, 00:20:40.342 "nvme_iov_md": false 00:20:40.342 }, 00:20:40.342 "memory_domains": [ 00:20:40.342 { 00:20:40.342 "dma_device_id": "system", 00:20:40.342 "dma_device_type": 1 00:20:40.342 }, 00:20:40.342 { 00:20:40.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.342 "dma_device_type": 2 00:20:40.342 } 00:20:40.342 ], 00:20:40.342 "driver_specific": {} 00:20:40.342 } 00:20:40.342 ] 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.342 
10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.342 "name": "Existed_Raid", 00:20:40.342 "uuid": "1c410086-b2cc-41eb-ba70-bf60fdb23bb0", 00:20:40.342 "strip_size_kb": 0, 00:20:40.342 "state": "online", 00:20:40.342 "raid_level": "raid1", 00:20:40.342 "superblock": true, 00:20:40.342 "num_base_bdevs": 2, 00:20:40.342 "num_base_bdevs_discovered": 2, 00:20:40.342 "num_base_bdevs_operational": 2, 00:20:40.342 
"base_bdevs_list": [ 00:20:40.342 { 00:20:40.342 "name": "BaseBdev1", 00:20:40.342 "uuid": "461964a3-ee44-4980-85da-d6517af24161", 00:20:40.342 "is_configured": true, 00:20:40.342 "data_offset": 256, 00:20:40.342 "data_size": 7936 00:20:40.342 }, 00:20:40.342 { 00:20:40.342 "name": "BaseBdev2", 00:20:40.342 "uuid": "77870f80-db93-49b6-9765-3c36222c36fc", 00:20:40.342 "is_configured": true, 00:20:40.342 "data_offset": 256, 00:20:40.342 "data_size": 7936 00:20:40.342 } 00:20:40.342 ] 00:20:40.342 }' 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.342 10:49:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.601 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:20:40.860 [2024-11-15 10:49:11.161411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:40.860 "name": "Existed_Raid", 00:20:40.860 "aliases": [ 00:20:40.860 "1c410086-b2cc-41eb-ba70-bf60fdb23bb0" 00:20:40.860 ], 00:20:40.860 "product_name": "Raid Volume", 00:20:40.860 "block_size": 4096, 00:20:40.860 "num_blocks": 7936, 00:20:40.860 "uuid": "1c410086-b2cc-41eb-ba70-bf60fdb23bb0", 00:20:40.860 "md_size": 32, 00:20:40.860 "md_interleave": false, 00:20:40.860 "dif_type": 0, 00:20:40.860 "assigned_rate_limits": { 00:20:40.860 "rw_ios_per_sec": 0, 00:20:40.860 "rw_mbytes_per_sec": 0, 00:20:40.860 "r_mbytes_per_sec": 0, 00:20:40.860 "w_mbytes_per_sec": 0 00:20:40.860 }, 00:20:40.860 "claimed": false, 00:20:40.860 "zoned": false, 00:20:40.860 "supported_io_types": { 00:20:40.860 "read": true, 00:20:40.860 "write": true, 00:20:40.860 "unmap": false, 00:20:40.860 "flush": false, 00:20:40.860 "reset": true, 00:20:40.860 "nvme_admin": false, 00:20:40.860 "nvme_io": false, 00:20:40.860 "nvme_io_md": false, 00:20:40.860 "write_zeroes": true, 00:20:40.860 "zcopy": false, 00:20:40.860 "get_zone_info": false, 00:20:40.860 "zone_management": false, 00:20:40.860 "zone_append": false, 00:20:40.860 "compare": false, 00:20:40.860 "compare_and_write": false, 00:20:40.860 "abort": false, 00:20:40.860 "seek_hole": false, 00:20:40.860 "seek_data": false, 00:20:40.860 "copy": false, 00:20:40.860 "nvme_iov_md": false 00:20:40.860 }, 00:20:40.860 "memory_domains": [ 00:20:40.860 { 00:20:40.860 "dma_device_id": "system", 00:20:40.860 "dma_device_type": 1 00:20:40.860 }, 00:20:40.860 { 00:20:40.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.860 "dma_device_type": 2 00:20:40.860 }, 00:20:40.860 { 
00:20:40.860 "dma_device_id": "system", 00:20:40.860 "dma_device_type": 1 00:20:40.860 }, 00:20:40.860 { 00:20:40.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.860 "dma_device_type": 2 00:20:40.860 } 00:20:40.860 ], 00:20:40.860 "driver_specific": { 00:20:40.860 "raid": { 00:20:40.860 "uuid": "1c410086-b2cc-41eb-ba70-bf60fdb23bb0", 00:20:40.860 "strip_size_kb": 0, 00:20:40.860 "state": "online", 00:20:40.860 "raid_level": "raid1", 00:20:40.860 "superblock": true, 00:20:40.860 "num_base_bdevs": 2, 00:20:40.860 "num_base_bdevs_discovered": 2, 00:20:40.860 "num_base_bdevs_operational": 2, 00:20:40.860 "base_bdevs_list": [ 00:20:40.860 { 00:20:40.860 "name": "BaseBdev1", 00:20:40.860 "uuid": "461964a3-ee44-4980-85da-d6517af24161", 00:20:40.860 "is_configured": true, 00:20:40.860 "data_offset": 256, 00:20:40.860 "data_size": 7936 00:20:40.860 }, 00:20:40.860 { 00:20:40.860 "name": "BaseBdev2", 00:20:40.860 "uuid": "77870f80-db93-49b6-9765-3c36222c36fc", 00:20:40.860 "is_configured": true, 00:20:40.860 "data_offset": 256, 00:20:40.860 "data_size": 7936 00:20:40.860 } 00:20:40.860 ] 00:20:40.860 } 00:20:40.860 } 00:20:40.860 }' 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:40.860 BaseBdev2' 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.860 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.860 [2024-11-15 10:49:11.417163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.119 "name": "Existed_Raid", 00:20:41.119 "uuid": "1c410086-b2cc-41eb-ba70-bf60fdb23bb0", 00:20:41.119 "strip_size_kb": 0, 00:20:41.119 "state": "online", 00:20:41.119 "raid_level": "raid1", 00:20:41.119 "superblock": true, 00:20:41.119 "num_base_bdevs": 2, 00:20:41.119 "num_base_bdevs_discovered": 1, 00:20:41.119 "num_base_bdevs_operational": 1, 00:20:41.119 "base_bdevs_list": [ 00:20:41.119 { 00:20:41.119 "name": null, 00:20:41.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.119 "is_configured": false, 00:20:41.119 "data_offset": 0, 00:20:41.119 "data_size": 7936 00:20:41.119 }, 00:20:41.119 { 00:20:41.119 "name": "BaseBdev2", 00:20:41.119 "uuid": 
"77870f80-db93-49b6-9765-3c36222c36fc", 00:20:41.119 "is_configured": true, 00:20:41.119 "data_offset": 256, 00:20:41.119 "data_size": 7936 00:20:41.119 } 00:20:41.119 ] 00:20:41.119 }' 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.119 10:49:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.685 [2024-11-15 10:49:12.097514] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:41.685 [2024-11-15 10:49:12.097650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:41.685 [2024-11-15 10:49:12.186336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.685 [2024-11-15 10:49:12.186414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.685 [2024-11-15 10:49:12.186435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:41.685 10:49:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87796 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87796 ']' 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87796 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:41.685 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87796 00:20:41.976 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:41.977 killing process with pid 87796 00:20:41.977 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:41.977 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87796' 00:20:41.977 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87796 00:20:41.977 [2024-11-15 10:49:12.263407] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.977 10:49:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87796 00:20:41.977 [2024-11-15 10:49:12.277723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:42.934 10:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:20:42.934 00:20:42.934 real 0m5.245s 00:20:42.934 user 0m8.210s 00:20:42.934 sys 0m0.646s 00:20:42.934 10:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:42.934 
************************************ 00:20:42.934 END TEST raid_state_function_test_sb_md_separate 00:20:42.934 ************************************ 00:20:42.934 10:49:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.934 10:49:13 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:20:42.934 10:49:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:42.934 10:49:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:42.934 10:49:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.934 ************************************ 00:20:42.934 START TEST raid_superblock_test_md_separate 00:20:42.934 ************************************ 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88045 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88045 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88045 ']' 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:42.934 10:49:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.934 [2024-11-15 10:49:13.444676] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:20:42.934 [2024-11-15 10:49:13.444834] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88045 ] 00:20:43.192 [2024-11-15 10:49:13.639210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.451 [2024-11-15 10:49:13.814505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.709 [2024-11-15 10:49:14.013884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.709 [2024-11-15 10:49:14.013970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:43.967 10:49:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.967 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.226 malloc1 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.226 [2024-11-15 10:49:14.559583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:44.226 [2024-11-15 10:49:14.559667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.226 [2024-11-15 10:49:14.559721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:44.226 [2024-11-15 10:49:14.559751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.226 [2024-11-15 10:49:14.562298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.226 [2024-11-15 10:49:14.562373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:20:44.226 pt1 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.226 malloc2 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.226 10:49:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.226 [2024-11-15 10:49:14.612091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:44.226 [2024-11-15 10:49:14.612368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.226 [2024-11-15 10:49:14.612563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:44.226 [2024-11-15 10:49:14.612607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.226 [2024-11-15 10:49:14.615293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.226 [2024-11-15 10:49:14.615481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:44.226 pt2 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.226 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.226 [2024-11-15 10:49:14.624378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:44.226 [2024-11-15 10:49:14.626944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:44.226 [2024-11-15 10:49:14.627267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:44.226 [2024-11-15 10:49:14.627292] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:44.226 [2024-11-15 10:49:14.627497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:44.226 [2024-11-15 10:49:14.627785] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:44.226 [2024-11-15 10:49:14.627826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:44.227 [2024-11-15 10:49:14.628087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.227 10:49:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.227 "name": "raid_bdev1", 00:20:44.227 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:44.227 "strip_size_kb": 0, 00:20:44.227 "state": "online", 00:20:44.227 "raid_level": "raid1", 00:20:44.227 "superblock": true, 00:20:44.227 "num_base_bdevs": 2, 00:20:44.227 "num_base_bdevs_discovered": 2, 00:20:44.227 "num_base_bdevs_operational": 2, 00:20:44.227 "base_bdevs_list": [ 00:20:44.227 { 00:20:44.227 "name": "pt1", 00:20:44.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:44.227 "is_configured": true, 00:20:44.227 "data_offset": 256, 00:20:44.227 "data_size": 7936 00:20:44.227 }, 00:20:44.227 { 00:20:44.227 "name": "pt2", 00:20:44.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:44.227 "is_configured": true, 00:20:44.227 "data_offset": 256, 00:20:44.227 "data_size": 7936 00:20:44.227 } 00:20:44.227 ] 00:20:44.227 }' 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.227 10:49:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:44.794 [2024-11-15 10:49:15.129069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:44.794 "name": "raid_bdev1", 00:20:44.794 "aliases": [ 00:20:44.794 "7203ae3f-e0a4-45be-8cc4-00fcddea1474" 00:20:44.794 ], 00:20:44.794 "product_name": "Raid Volume", 00:20:44.794 "block_size": 4096, 00:20:44.794 "num_blocks": 7936, 00:20:44.794 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:44.794 "md_size": 32, 00:20:44.794 "md_interleave": false, 00:20:44.794 "dif_type": 0, 00:20:44.794 "assigned_rate_limits": { 00:20:44.794 "rw_ios_per_sec": 0, 00:20:44.794 "rw_mbytes_per_sec": 0, 00:20:44.794 "r_mbytes_per_sec": 0, 00:20:44.794 "w_mbytes_per_sec": 0 00:20:44.794 }, 00:20:44.794 "claimed": false, 00:20:44.794 "zoned": false, 
00:20:44.794 "supported_io_types": { 00:20:44.794 "read": true, 00:20:44.794 "write": true, 00:20:44.794 "unmap": false, 00:20:44.794 "flush": false, 00:20:44.794 "reset": true, 00:20:44.794 "nvme_admin": false, 00:20:44.794 "nvme_io": false, 00:20:44.794 "nvme_io_md": false, 00:20:44.794 "write_zeroes": true, 00:20:44.794 "zcopy": false, 00:20:44.794 "get_zone_info": false, 00:20:44.794 "zone_management": false, 00:20:44.794 "zone_append": false, 00:20:44.794 "compare": false, 00:20:44.794 "compare_and_write": false, 00:20:44.794 "abort": false, 00:20:44.794 "seek_hole": false, 00:20:44.794 "seek_data": false, 00:20:44.794 "copy": false, 00:20:44.794 "nvme_iov_md": false 00:20:44.794 }, 00:20:44.794 "memory_domains": [ 00:20:44.794 { 00:20:44.794 "dma_device_id": "system", 00:20:44.794 "dma_device_type": 1 00:20:44.794 }, 00:20:44.794 { 00:20:44.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.794 "dma_device_type": 2 00:20:44.794 }, 00:20:44.794 { 00:20:44.794 "dma_device_id": "system", 00:20:44.794 "dma_device_type": 1 00:20:44.794 }, 00:20:44.794 { 00:20:44.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.794 "dma_device_type": 2 00:20:44.794 } 00:20:44.794 ], 00:20:44.794 "driver_specific": { 00:20:44.794 "raid": { 00:20:44.794 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:44.794 "strip_size_kb": 0, 00:20:44.794 "state": "online", 00:20:44.794 "raid_level": "raid1", 00:20:44.794 "superblock": true, 00:20:44.794 "num_base_bdevs": 2, 00:20:44.794 "num_base_bdevs_discovered": 2, 00:20:44.794 "num_base_bdevs_operational": 2, 00:20:44.794 "base_bdevs_list": [ 00:20:44.794 { 00:20:44.794 "name": "pt1", 00:20:44.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:44.794 "is_configured": true, 00:20:44.794 "data_offset": 256, 00:20:44.794 "data_size": 7936 00:20:44.794 }, 00:20:44.794 { 00:20:44.794 "name": "pt2", 00:20:44.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:44.794 "is_configured": true, 00:20:44.794 "data_offset": 256, 
00:20:44.794 "data_size": 7936 00:20:44.794 } 00:20:44.794 ] 00:20:44.794 } 00:20:44.794 } 00:20:44.794 }' 00:20:44.794 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:44.795 pt2' 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:44.795 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 [2024-11-15 10:49:15.365049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7203ae3f-e0a4-45be-8cc4-00fcddea1474 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 7203ae3f-e0a4-45be-8cc4-00fcddea1474 ']' 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 [2024-11-15 10:49:15.400556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.054 [2024-11-15 10:49:15.400597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:45.054 [2024-11-15 10:49:15.400752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.054 [2024-11-15 10:49:15.400842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:45.054 [2024-11-15 10:49:15.400865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:20:45.054 10:49:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 [2024-11-15 10:49:15.540702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:45.054 [2024-11-15 10:49:15.543180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:45.054 [2024-11-15 10:49:15.543308] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:45.054 [2024-11-15 10:49:15.543441] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:45.054 [2024-11-15 10:49:15.543474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.054 [2024-11-15 10:49:15.543491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:45.054 request: 00:20:45.054 { 00:20:45.054 "name": 
"raid_bdev1", 00:20:45.054 "raid_level": "raid1", 00:20:45.054 "base_bdevs": [ 00:20:45.054 "malloc1", 00:20:45.054 "malloc2" 00:20:45.054 ], 00:20:45.054 "superblock": false, 00:20:45.054 "method": "bdev_raid_create", 00:20:45.054 "req_id": 1 00:20:45.054 } 00:20:45.054 Got JSON-RPC error response 00:20:45.054 response: 00:20:45.054 { 00:20:45.054 "code": -17, 00:20:45.054 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:45.054 } 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:45.054 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.313 [2024-11-15 10:49:15.620716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:45.313 [2024-11-15 10:49:15.620808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.313 [2024-11-15 10:49:15.620837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:45.313 [2024-11-15 10:49:15.620853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.313 [2024-11-15 10:49:15.623439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.313 [2024-11-15 10:49:15.623497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:45.313 [2024-11-15 10:49:15.623578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:45.313 [2024-11-15 10:49:15.623682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:45.313 pt1 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.313 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.313 "name": "raid_bdev1", 00:20:45.313 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:45.314 "strip_size_kb": 0, 00:20:45.314 "state": "configuring", 00:20:45.314 "raid_level": "raid1", 00:20:45.314 "superblock": true, 00:20:45.314 "num_base_bdevs": 2, 00:20:45.314 "num_base_bdevs_discovered": 1, 00:20:45.314 "num_base_bdevs_operational": 2, 00:20:45.314 "base_bdevs_list": [ 00:20:45.314 { 00:20:45.314 "name": "pt1", 00:20:45.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:45.314 "is_configured": true, 00:20:45.314 "data_offset": 256, 00:20:45.314 "data_size": 7936 00:20:45.314 }, 00:20:45.314 { 00:20:45.314 "name": null, 00:20:45.314 
"uuid": "00000000-0000-0000-0000-000000000002", 00:20:45.314 "is_configured": false, 00:20:45.314 "data_offset": 256, 00:20:45.314 "data_size": 7936 00:20:45.314 } 00:20:45.314 ] 00:20:45.314 }' 00:20:45.314 10:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.314 10:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 [2024-11-15 10:49:16.208868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:45.882 [2024-11-15 10:49:16.208984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.882 [2024-11-15 10:49:16.209018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:45.882 [2024-11-15 10:49:16.209035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.882 [2024-11-15 10:49:16.209396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.882 [2024-11-15 10:49:16.209430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:45.882 [2024-11-15 10:49:16.209505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:20:45.882 [2024-11-15 10:49:16.209545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:45.882 [2024-11-15 10:49:16.209693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:45.882 [2024-11-15 10:49:16.209715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:45.882 [2024-11-15 10:49:16.209809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:45.882 [2024-11-15 10:49:16.209962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:45.882 [2024-11-15 10:49:16.209978] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:45.882 [2024-11-15 10:49:16.210104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.882 pt2 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.882 "name": "raid_bdev1", 00:20:45.882 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:45.882 "strip_size_kb": 0, 00:20:45.882 "state": "online", 00:20:45.882 "raid_level": "raid1", 00:20:45.882 "superblock": true, 00:20:45.882 "num_base_bdevs": 2, 00:20:45.882 "num_base_bdevs_discovered": 2, 00:20:45.882 "num_base_bdevs_operational": 2, 00:20:45.882 "base_bdevs_list": [ 00:20:45.882 { 00:20:45.882 "name": "pt1", 00:20:45.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:45.882 "is_configured": true, 00:20:45.882 "data_offset": 256, 00:20:45.882 "data_size": 7936 00:20:45.882 }, 00:20:45.882 { 00:20:45.882 "name": "pt2", 00:20:45.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:45.882 "is_configured": true, 00:20:45.882 "data_offset": 256, 
00:20:45.882 "data_size": 7936 00:20:45.882 } 00:20:45.882 ] 00:20:45.882 }' 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.882 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.450 [2024-11-15 10:49:16.769331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.450 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:46.450 "name": "raid_bdev1", 00:20:46.450 "aliases": [ 00:20:46.450 "7203ae3f-e0a4-45be-8cc4-00fcddea1474" 00:20:46.450 ], 00:20:46.450 "product_name": 
"Raid Volume", 00:20:46.450 "block_size": 4096, 00:20:46.450 "num_blocks": 7936, 00:20:46.450 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:46.450 "md_size": 32, 00:20:46.450 "md_interleave": false, 00:20:46.450 "dif_type": 0, 00:20:46.450 "assigned_rate_limits": { 00:20:46.450 "rw_ios_per_sec": 0, 00:20:46.450 "rw_mbytes_per_sec": 0, 00:20:46.450 "r_mbytes_per_sec": 0, 00:20:46.450 "w_mbytes_per_sec": 0 00:20:46.450 }, 00:20:46.450 "claimed": false, 00:20:46.450 "zoned": false, 00:20:46.450 "supported_io_types": { 00:20:46.450 "read": true, 00:20:46.450 "write": true, 00:20:46.450 "unmap": false, 00:20:46.450 "flush": false, 00:20:46.450 "reset": true, 00:20:46.450 "nvme_admin": false, 00:20:46.450 "nvme_io": false, 00:20:46.450 "nvme_io_md": false, 00:20:46.450 "write_zeroes": true, 00:20:46.450 "zcopy": false, 00:20:46.450 "get_zone_info": false, 00:20:46.450 "zone_management": false, 00:20:46.450 "zone_append": false, 00:20:46.450 "compare": false, 00:20:46.450 "compare_and_write": false, 00:20:46.450 "abort": false, 00:20:46.450 "seek_hole": false, 00:20:46.450 "seek_data": false, 00:20:46.450 "copy": false, 00:20:46.450 "nvme_iov_md": false 00:20:46.450 }, 00:20:46.450 "memory_domains": [ 00:20:46.450 { 00:20:46.450 "dma_device_id": "system", 00:20:46.450 "dma_device_type": 1 00:20:46.450 }, 00:20:46.450 { 00:20:46.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.450 "dma_device_type": 2 00:20:46.450 }, 00:20:46.450 { 00:20:46.450 "dma_device_id": "system", 00:20:46.450 "dma_device_type": 1 00:20:46.450 }, 00:20:46.450 { 00:20:46.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.450 "dma_device_type": 2 00:20:46.450 } 00:20:46.450 ], 00:20:46.450 "driver_specific": { 00:20:46.450 "raid": { 00:20:46.450 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:46.450 "strip_size_kb": 0, 00:20:46.450 "state": "online", 00:20:46.450 "raid_level": "raid1", 00:20:46.450 "superblock": true, 00:20:46.450 "num_base_bdevs": 2, 00:20:46.450 
"num_base_bdevs_discovered": 2, 00:20:46.450 "num_base_bdevs_operational": 2, 00:20:46.451 "base_bdevs_list": [ 00:20:46.451 { 00:20:46.451 "name": "pt1", 00:20:46.451 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:46.451 "is_configured": true, 00:20:46.451 "data_offset": 256, 00:20:46.451 "data_size": 7936 00:20:46.451 }, 00:20:46.451 { 00:20:46.451 "name": "pt2", 00:20:46.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:46.451 "is_configured": true, 00:20:46.451 "data_offset": 256, 00:20:46.451 "data_size": 7936 00:20:46.451 } 00:20:46.451 ] 00:20:46.451 } 00:20:46.451 } 00:20:46.451 }' 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:46.451 pt2' 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.451 
10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.451 10:49:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.711 [2024-11-15 10:49:17.017527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 7203ae3f-e0a4-45be-8cc4-00fcddea1474 '!=' 7203ae3f-e0a4-45be-8cc4-00fcddea1474 ']' 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.711 [2024-11-15 10:49:17.069208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.711 10:49:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.711 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.711 "name": "raid_bdev1", 00:20:46.711 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:46.711 "strip_size_kb": 0, 00:20:46.711 "state": "online", 00:20:46.711 "raid_level": "raid1", 00:20:46.711 "superblock": true, 00:20:46.711 "num_base_bdevs": 2, 00:20:46.711 "num_base_bdevs_discovered": 1, 00:20:46.711 "num_base_bdevs_operational": 1, 00:20:46.711 "base_bdevs_list": [ 00:20:46.711 { 00:20:46.711 "name": null, 00:20:46.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.711 "is_configured": false, 00:20:46.711 "data_offset": 0, 00:20:46.712 "data_size": 7936 00:20:46.712 }, 00:20:46.712 { 00:20:46.712 "name": "pt2", 00:20:46.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:46.712 "is_configured": true, 00:20:46.712 "data_offset": 256, 00:20:46.712 "data_size": 7936 00:20:46.712 } 00:20:46.712 ] 00:20:46.712 }' 00:20:46.712 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:20:46.712 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.278 [2024-11-15 10:49:17.553221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.278 [2024-11-15 10:49:17.553262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:47.278 [2024-11-15 10:49:17.553384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:47.278 [2024-11-15 10:49:17.553457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.278 [2024-11-15 10:49:17.553478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:47.278 10:49:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.278 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.278 [2024-11-15 10:49:17.617225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:47.278 [2024-11-15 10:49:17.617314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.279 
[2024-11-15 10:49:17.617341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:47.279 [2024-11-15 10:49:17.617376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.279 [2024-11-15 10:49:17.619956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.279 [2024-11-15 10:49:17.620019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:47.279 [2024-11-15 10:49:17.620104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:47.279 [2024-11-15 10:49:17.620170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:47.279 [2024-11-15 10:49:17.620294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:47.279 [2024-11-15 10:49:17.620317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:47.279 [2024-11-15 10:49:17.620433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:47.279 [2024-11-15 10:49:17.620591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:47.279 [2024-11-15 10:49:17.620615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:47.279 [2024-11-15 10:49:17.620741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.279 pt2 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.279 "name": "raid_bdev1", 00:20:47.279 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:47.279 "strip_size_kb": 0, 00:20:47.279 "state": "online", 00:20:47.279 "raid_level": "raid1", 00:20:47.279 "superblock": true, 00:20:47.279 "num_base_bdevs": 2, 00:20:47.279 "num_base_bdevs_discovered": 1, 00:20:47.279 "num_base_bdevs_operational": 1, 00:20:47.279 "base_bdevs_list": [ 00:20:47.279 { 00:20:47.279 
"name": null, 00:20:47.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.279 "is_configured": false, 00:20:47.279 "data_offset": 256, 00:20:47.279 "data_size": 7936 00:20:47.279 }, 00:20:47.279 { 00:20:47.279 "name": "pt2", 00:20:47.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:47.279 "is_configured": true, 00:20:47.279 "data_offset": 256, 00:20:47.279 "data_size": 7936 00:20:47.279 } 00:20:47.279 ] 00:20:47.279 }' 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.279 10:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.855 [2024-11-15 10:49:18.133380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.855 [2024-11-15 10:49:18.133428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:47.855 [2024-11-15 10:49:18.133533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:47.855 [2024-11-15 10:49:18.133611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.855 [2024-11-15 10:49:18.133628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.855 [2024-11-15 10:49:18.193446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:47.855 [2024-11-15 10:49:18.193531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.855 [2024-11-15 10:49:18.193562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:47.855 [2024-11-15 10:49:18.193578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.855 [2024-11-15 10:49:18.196075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.855 [2024-11-15 10:49:18.196122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:47.855 [2024-11-15 10:49:18.196202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:47.855 
[2024-11-15 10:49:18.196261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:47.855 [2024-11-15 10:49:18.196453] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:47.855 [2024-11-15 10:49:18.196480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.855 [2024-11-15 10:49:18.196509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:47.855 [2024-11-15 10:49:18.196599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:47.855 [2024-11-15 10:49:18.196707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:47.855 [2024-11-15 10:49:18.196724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:47.855 [2024-11-15 10:49:18.196807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:47.855 [2024-11-15 10:49:18.196945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:47.855 [2024-11-15 10:49:18.196965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:47.855 [2024-11-15 10:49:18.197100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.855 pt1 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.855 10:49:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.855 "name": "raid_bdev1", 00:20:47.855 "uuid": "7203ae3f-e0a4-45be-8cc4-00fcddea1474", 00:20:47.855 "strip_size_kb": 0, 00:20:47.855 "state": "online", 00:20:47.855 "raid_level": "raid1", 00:20:47.855 "superblock": true, 00:20:47.855 "num_base_bdevs": 2, 00:20:47.855 "num_base_bdevs_discovered": 1, 00:20:47.855 
"num_base_bdevs_operational": 1, 00:20:47.855 "base_bdevs_list": [ 00:20:47.855 { 00:20:47.855 "name": null, 00:20:47.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.855 "is_configured": false, 00:20:47.855 "data_offset": 256, 00:20:47.855 "data_size": 7936 00:20:47.855 }, 00:20:47.855 { 00:20:47.855 "name": "pt2", 00:20:47.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:47.855 "is_configured": true, 00:20:47.855 "data_offset": 256, 00:20:47.855 "data_size": 7936 00:20:47.855 } 00:20:47.855 ] 00:20:47.855 }' 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.855 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:48.423 [2024-11-15 
10:49:18.729882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 7203ae3f-e0a4-45be-8cc4-00fcddea1474 '!=' 7203ae3f-e0a4-45be-8cc4-00fcddea1474 ']' 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88045 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88045 ']' 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 88045 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88045 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:48.423 killing process with pid 88045 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88045' 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 88045 00:20:48.423 10:49:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 88045 00:20:48.423 [2024-11-15 10:49:18.802462] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:48.423 [2024-11-15 10:49:18.802635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:20:48.423 [2024-11-15 10:49:18.802756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.423 [2024-11-15 10:49:18.802801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:48.682 [2024-11-15 10:49:18.991973] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.618 10:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:20:49.618 00:20:49.618 real 0m6.667s 00:20:49.618 user 0m10.670s 00:20:49.618 sys 0m0.877s 00:20:49.618 10:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:49.618 10:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.618 ************************************ 00:20:49.618 END TEST raid_superblock_test_md_separate 00:20:49.618 ************************************ 00:20:49.618 10:49:20 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:20:49.618 10:49:20 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:20:49.618 10:49:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:49.618 10:49:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:49.618 10:49:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.618 ************************************ 00:20:49.618 START TEST raid_rebuild_test_sb_md_separate 00:20:49.618 ************************************ 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:49.618 
10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88375 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88375 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88375 ']' 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.618 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:49.619 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:49.619 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:49.619 10:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.619 [2024-11-15 10:49:20.152695] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:20:49.619 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:49.619 Zero copy mechanism will not be used. 00:20:49.619 [2024-11-15 10:49:20.152867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88375 ] 00:20:49.877 [2024-11-15 10:49:20.333021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.135 [2024-11-15 10:49:20.436900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.135 [2024-11-15 10:49:20.618664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.135 [2024-11-15 10:49:20.618740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.702 BaseBdev1_malloc 
00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.702 [2024-11-15 10:49:21.167280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:50.702 [2024-11-15 10:49:21.167369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.702 [2024-11-15 10:49:21.167405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:50.702 [2024-11-15 10:49:21.167425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.702 [2024-11-15 10:49:21.169791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.702 [2024-11-15 10:49:21.169842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:50.702 BaseBdev1 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.702 BaseBdev2_malloc 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.702 [2024-11-15 10:49:21.211975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:50.702 [2024-11-15 10:49:21.212056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.702 [2024-11-15 10:49:21.212087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:50.702 [2024-11-15 10:49:21.212104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.702 [2024-11-15 10:49:21.214496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.702 [2024-11-15 10:49:21.214547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:50.702 BaseBdev2 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.702 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.961 spare_malloc 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.961 spare_delay 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.961 [2024-11-15 10:49:21.275926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:50.961 [2024-11-15 10:49:21.276013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.961 [2024-11-15 10:49:21.276052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:50.961 [2024-11-15 10:49:21.276071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.961 [2024-11-15 10:49:21.278673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.961 [2024-11-15 10:49:21.278730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:50.961 spare 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.961 [2024-11-15 10:49:21.283989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:50.961 [2024-11-15 10:49:21.286374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:50.961 [2024-11-15 10:49:21.286658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:50.961 [2024-11-15 10:49:21.286683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:50.961 [2024-11-15 10:49:21.286813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:50.961 [2024-11-15 10:49:21.287037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:50.961 [2024-11-15 10:49:21.287056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:50.961 [2024-11-15 10:49:21.287209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:50.961 10:49:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:50.961 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.962 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.962 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.962 "name": "raid_bdev1", 00:20:50.962 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:50.962 "strip_size_kb": 0, 00:20:50.962 "state": "online", 00:20:50.962 "raid_level": "raid1", 00:20:50.962 "superblock": true, 00:20:50.962 "num_base_bdevs": 2, 00:20:50.962 "num_base_bdevs_discovered": 2, 00:20:50.962 "num_base_bdevs_operational": 2, 00:20:50.962 "base_bdevs_list": [ 00:20:50.962 { 00:20:50.962 "name": "BaseBdev1", 00:20:50.962 "uuid": "28eefe7d-bf66-56dc-b149-7372df900b28", 00:20:50.962 "is_configured": true, 00:20:50.962 "data_offset": 256, 00:20:50.962 "data_size": 7936 00:20:50.962 }, 00:20:50.962 { 00:20:50.962 "name": "BaseBdev2", 00:20:50.962 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:50.962 "is_configured": true, 00:20:50.962 "data_offset": 256, 00:20:50.962 "data_size": 7936 
00:20:50.962 } 00:20:50.962 ] 00:20:50.962 }' 00:20:50.962 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.962 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.528 [2024-11-15 10:49:21.784659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:51.528 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:51.529 10:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:51.787 [2024-11-15 10:49:22.288503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:51.787 /dev/nbd0 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:51.787 1+0 records in 00:20:51.787 1+0 records out 00:20:51.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457685 s, 8.9 MB/s 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:20:51.787 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.046 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:52.046 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:20:52.046 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:52.046 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:52.046 10:49:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:52.046 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:52.046 10:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:52.980 7936+0 records in 00:20:52.980 7936+0 records out 00:20:52.980 32505856 bytes (33 MB, 31 MiB) copied, 1.12422 s, 28.9 MB/s 00:20:52.980 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:52.980 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:52.980 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:52.980 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:52.980 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:52.980 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.980 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:53.548 [2024-11-15 10:49:23.845513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.548 10:49:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.548 [2024-11-15 10:49:23.863590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.548 "name": "raid_bdev1", 00:20:53.548 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:53.548 "strip_size_kb": 0, 00:20:53.548 "state": "online", 00:20:53.548 "raid_level": "raid1", 00:20:53.548 "superblock": true, 00:20:53.548 "num_base_bdevs": 2, 00:20:53.548 "num_base_bdevs_discovered": 1, 00:20:53.548 "num_base_bdevs_operational": 1, 00:20:53.548 "base_bdevs_list": [ 00:20:53.548 { 00:20:53.548 "name": null, 00:20:53.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.548 "is_configured": false, 00:20:53.548 "data_offset": 0, 00:20:53.548 "data_size": 7936 00:20:53.548 }, 00:20:53.548 { 00:20:53.548 "name": "BaseBdev2", 00:20:53.548 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:53.548 "is_configured": true, 00:20:53.548 "data_offset": 256, 00:20:53.548 "data_size": 7936 00:20:53.548 } 00:20:53.548 ] 00:20:53.548 }' 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.548 10:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:54.115 10:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:54.115 10:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.115 10:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.115 [2024-11-15 10:49:24.403781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:54.115 [2024-11-15 10:49:24.418063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:54.115 10:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.115 10:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:54.115 [2024-11-15 10:49:24.420581] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.050 "name": "raid_bdev1", 00:20:55.050 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:55.050 "strip_size_kb": 0, 00:20:55.050 "state": "online", 00:20:55.050 "raid_level": "raid1", 00:20:55.050 "superblock": true, 00:20:55.050 "num_base_bdevs": 2, 00:20:55.050 "num_base_bdevs_discovered": 2, 00:20:55.050 "num_base_bdevs_operational": 2, 00:20:55.050 "process": { 00:20:55.050 "type": "rebuild", 00:20:55.050 "target": "spare", 00:20:55.050 "progress": { 00:20:55.050 "blocks": 2560, 00:20:55.050 "percent": 32 00:20:55.050 } 00:20:55.050 }, 00:20:55.050 "base_bdevs_list": [ 00:20:55.050 { 00:20:55.050 "name": "spare", 00:20:55.050 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:20:55.050 "is_configured": true, 00:20:55.050 "data_offset": 256, 00:20:55.050 "data_size": 7936 00:20:55.050 }, 00:20:55.050 { 00:20:55.050 "name": "BaseBdev2", 00:20:55.050 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:55.050 "is_configured": true, 00:20:55.050 "data_offset": 256, 00:20:55.050 "data_size": 7936 00:20:55.050 } 00:20:55.050 ] 00:20:55.050 }' 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.050 10:49:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.050 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.050 [2024-11-15 10:49:25.574379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:55.309 [2024-11-15 10:49:25.628875] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:55.309 [2024-11-15 10:49:25.629002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.309 [2024-11-15 10:49:25.629029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:55.309 [2024-11-15 10:49:25.629048] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.309 10:49:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.309 "name": "raid_bdev1", 00:20:55.309 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:55.309 "strip_size_kb": 0, 00:20:55.309 "state": "online", 00:20:55.309 "raid_level": "raid1", 00:20:55.309 "superblock": true, 00:20:55.309 "num_base_bdevs": 2, 00:20:55.309 "num_base_bdevs_discovered": 1, 00:20:55.309 "num_base_bdevs_operational": 1, 00:20:55.309 "base_bdevs_list": [ 00:20:55.309 { 00:20:55.309 "name": null, 00:20:55.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.309 "is_configured": false, 00:20:55.309 "data_offset": 0, 00:20:55.309 "data_size": 7936 00:20:55.309 }, 00:20:55.309 { 00:20:55.309 "name": "BaseBdev2", 00:20:55.309 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:55.309 "is_configured": true, 00:20:55.309 "data_offset": 256, 00:20:55.309 "data_size": 7936 00:20:55.309 } 00:20:55.309 ] 00:20:55.309 }' 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.309 10:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.876 "name": "raid_bdev1", 00:20:55.876 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:55.876 "strip_size_kb": 0, 00:20:55.876 "state": "online", 00:20:55.876 "raid_level": "raid1", 00:20:55.876 "superblock": true, 00:20:55.876 "num_base_bdevs": 2, 00:20:55.876 "num_base_bdevs_discovered": 1, 00:20:55.876 "num_base_bdevs_operational": 1, 00:20:55.876 "base_bdevs_list": [ 00:20:55.876 { 00:20:55.876 "name": null, 00:20:55.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.876 
"is_configured": false, 00:20:55.876 "data_offset": 0, 00:20:55.876 "data_size": 7936 00:20:55.876 }, 00:20:55.876 { 00:20:55.876 "name": "BaseBdev2", 00:20:55.876 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:55.876 "is_configured": true, 00:20:55.876 "data_offset": 256, 00:20:55.876 "data_size": 7936 00:20:55.876 } 00:20:55.876 ] 00:20:55.876 }' 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.876 [2024-11-15 10:49:26.306621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:55.876 [2024-11-15 10:49:26.319509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.876 10:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:55.876 [2024-11-15 10:49:26.321915] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:56.812 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.812 10:49:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.812 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:56.812 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:56.812 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.812 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.812 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.812 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.812 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.812 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.071 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.071 "name": "raid_bdev1", 00:20:57.071 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:57.071 "strip_size_kb": 0, 00:20:57.071 "state": "online", 00:20:57.071 "raid_level": "raid1", 00:20:57.071 "superblock": true, 00:20:57.071 "num_base_bdevs": 2, 00:20:57.072 "num_base_bdevs_discovered": 2, 00:20:57.072 "num_base_bdevs_operational": 2, 00:20:57.072 "process": { 00:20:57.072 "type": "rebuild", 00:20:57.072 "target": "spare", 00:20:57.072 "progress": { 00:20:57.072 "blocks": 2560, 00:20:57.072 "percent": 32 00:20:57.072 } 00:20:57.072 }, 00:20:57.072 "base_bdevs_list": [ 00:20:57.072 { 00:20:57.072 "name": "spare", 00:20:57.072 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:20:57.072 "is_configured": true, 00:20:57.072 "data_offset": 256, 00:20:57.072 "data_size": 7936 00:20:57.072 }, 
00:20:57.072 { 00:20:57.072 "name": "BaseBdev2", 00:20:57.072 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:57.072 "is_configured": true, 00:20:57.072 "data_offset": 256, 00:20:57.072 "data_size": 7936 00:20:57.072 } 00:20:57.072 ] 00:20:57.072 }' 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:57.072 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=761 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.072 10:49:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.072 "name": "raid_bdev1", 00:20:57.072 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:57.072 "strip_size_kb": 0, 00:20:57.072 "state": "online", 00:20:57.072 "raid_level": "raid1", 00:20:57.072 "superblock": true, 00:20:57.072 "num_base_bdevs": 2, 00:20:57.072 "num_base_bdevs_discovered": 2, 00:20:57.072 "num_base_bdevs_operational": 2, 00:20:57.072 "process": { 00:20:57.072 "type": "rebuild", 00:20:57.072 "target": "spare", 00:20:57.072 "progress": { 00:20:57.072 "blocks": 2816, 00:20:57.072 "percent": 35 00:20:57.072 } 00:20:57.072 }, 00:20:57.072 "base_bdevs_list": [ 00:20:57.072 { 00:20:57.072 "name": "spare", 00:20:57.072 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:20:57.072 "is_configured": true, 00:20:57.072 "data_offset": 256, 00:20:57.072 "data_size": 7936 00:20:57.072 }, 00:20:57.072 { 00:20:57.072 "name": "BaseBdev2", 00:20:57.072 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:57.072 
"is_configured": true, 00:20:57.072 "data_offset": 256, 00:20:57.072 "data_size": 7936 00:20:57.072 } 00:20:57.072 ] 00:20:57.072 }' 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.072 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.331 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.331 10:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.266 10:49:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.266 "name": "raid_bdev1", 00:20:58.266 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:58.266 "strip_size_kb": 0, 00:20:58.266 "state": "online", 00:20:58.266 "raid_level": "raid1", 00:20:58.266 "superblock": true, 00:20:58.266 "num_base_bdevs": 2, 00:20:58.266 "num_base_bdevs_discovered": 2, 00:20:58.266 "num_base_bdevs_operational": 2, 00:20:58.266 "process": { 00:20:58.266 "type": "rebuild", 00:20:58.266 "target": "spare", 00:20:58.266 "progress": { 00:20:58.266 "blocks": 5888, 00:20:58.266 "percent": 74 00:20:58.266 } 00:20:58.266 }, 00:20:58.266 "base_bdevs_list": [ 00:20:58.266 { 00:20:58.266 "name": "spare", 00:20:58.266 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:20:58.266 "is_configured": true, 00:20:58.266 "data_offset": 256, 00:20:58.266 "data_size": 7936 00:20:58.266 }, 00:20:58.266 { 00:20:58.266 "name": "BaseBdev2", 00:20:58.266 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:58.266 "is_configured": true, 00:20:58.266 "data_offset": 256, 00:20:58.266 "data_size": 7936 00:20:58.266 } 00:20:58.266 ] 00:20:58.266 }' 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.266 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.524 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.524 10:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.124 [2024-11-15 10:49:29.441603] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:20:59.124 [2024-11-15 10:49:29.441940] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:59.124 [2024-11-15 10:49:29.442166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.393 "name": "raid_bdev1", 00:20:59.393 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:59.393 "strip_size_kb": 0, 00:20:59.393 "state": "online", 00:20:59.393 "raid_level": "raid1", 00:20:59.393 "superblock": true, 00:20:59.393 
"num_base_bdevs": 2, 00:20:59.393 "num_base_bdevs_discovered": 2, 00:20:59.393 "num_base_bdevs_operational": 2, 00:20:59.393 "base_bdevs_list": [ 00:20:59.393 { 00:20:59.393 "name": "spare", 00:20:59.393 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:20:59.393 "is_configured": true, 00:20:59.393 "data_offset": 256, 00:20:59.393 "data_size": 7936 00:20:59.393 }, 00:20:59.393 { 00:20:59.393 "name": "BaseBdev2", 00:20:59.393 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:59.393 "is_configured": true, 00:20:59.393 "data_offset": 256, 00:20:59.393 "data_size": 7936 00:20:59.393 } 00:20:59.393 ] 00:20:59.393 }' 00:20:59.393 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.652 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:59.652 10:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.652 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:59.652 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:20:59.652 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.653 10:49:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.653 "name": "raid_bdev1", 00:20:59.653 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:59.653 "strip_size_kb": 0, 00:20:59.653 "state": "online", 00:20:59.653 "raid_level": "raid1", 00:20:59.653 "superblock": true, 00:20:59.653 "num_base_bdevs": 2, 00:20:59.653 "num_base_bdevs_discovered": 2, 00:20:59.653 "num_base_bdevs_operational": 2, 00:20:59.653 "base_bdevs_list": [ 00:20:59.653 { 00:20:59.653 "name": "spare", 00:20:59.653 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:20:59.653 "is_configured": true, 00:20:59.653 "data_offset": 256, 00:20:59.653 "data_size": 7936 00:20:59.653 }, 00:20:59.653 { 00:20:59.653 "name": "BaseBdev2", 00:20:59.653 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:59.653 "is_configured": true, 00:20:59.653 "data_offset": 256, 00:20:59.653 "data_size": 7936 00:20:59.653 } 00:20:59.653 ] 00:20:59.653 }' 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.653 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.911 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.911 "name": "raid_bdev1", 00:20:59.911 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:20:59.911 
"strip_size_kb": 0, 00:20:59.911 "state": "online", 00:20:59.911 "raid_level": "raid1", 00:20:59.911 "superblock": true, 00:20:59.911 "num_base_bdevs": 2, 00:20:59.911 "num_base_bdevs_discovered": 2, 00:20:59.911 "num_base_bdevs_operational": 2, 00:20:59.911 "base_bdevs_list": [ 00:20:59.911 { 00:20:59.911 "name": "spare", 00:20:59.911 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:20:59.911 "is_configured": true, 00:20:59.911 "data_offset": 256, 00:20:59.911 "data_size": 7936 00:20:59.911 }, 00:20:59.911 { 00:20:59.911 "name": "BaseBdev2", 00:20:59.911 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:20:59.911 "is_configured": true, 00:20:59.911 "data_offset": 256, 00:20:59.911 "data_size": 7936 00:20:59.911 } 00:20:59.911 ] 00:20:59.911 }' 00:20:59.911 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.911 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.477 [2024-11-15 10:49:30.744526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:00.477 [2024-11-15 10:49:30.744825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.477 [2024-11-15 10:49:30.745004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.477 [2024-11-15 10:49:30.745130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.477 [2024-11-15 10:49:30.745157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:00.477 10:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:00.737 /dev/nbd0 00:21:00.995 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:00.995 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:00.995 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:00.995 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:21:00.995 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:00.995 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.996 1+0 records in 00:21:00.996 1+0 records out 00:21:00.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455146 s, 9.0 MB/s 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:00.996 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:01.253 /dev/nbd1 00:21:01.253 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.511 1+0 records in 00:21:01.511 1+0 records out 00:21:01.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491622 s, 8.3 MB/s 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.511 10:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:01.511 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:01.511 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:01.511 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:01.511 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:21:01.511 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:01.511 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:01.511 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.077 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.336 [2024-11-15 10:49:32.759344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:02.336 [2024-11-15 10:49:32.759478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.336 [2024-11-15 10:49:32.759541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:02.336 [2024-11-15 10:49:32.759570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:02.336 [2024-11-15 10:49:32.762992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.336 [2024-11-15 10:49:32.763064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:02.336 [2024-11-15 10:49:32.763201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:02.336 [2024-11-15 10:49:32.763301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.336 [2024-11-15 10:49:32.763645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:02.336 spare 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.336 [2024-11-15 10:49:32.863827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:02.336 [2024-11-15 10:49:32.863906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:02.336 [2024-11-15 10:49:32.864081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:02.336 [2024-11-15 10:49:32.864307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:02.336 [2024-11-15 10:49:32.864323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:02.336 [2024-11-15 10:49:32.864540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.336 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.594 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.594 "name": "raid_bdev1", 00:21:02.594 "uuid": 
"52625e7c-e4f3-47de-ab14-306da52493af", 00:21:02.594 "strip_size_kb": 0, 00:21:02.594 "state": "online", 00:21:02.594 "raid_level": "raid1", 00:21:02.594 "superblock": true, 00:21:02.594 "num_base_bdevs": 2, 00:21:02.594 "num_base_bdevs_discovered": 2, 00:21:02.594 "num_base_bdevs_operational": 2, 00:21:02.594 "base_bdevs_list": [ 00:21:02.594 { 00:21:02.594 "name": "spare", 00:21:02.594 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:21:02.594 "is_configured": true, 00:21:02.594 "data_offset": 256, 00:21:02.594 "data_size": 7936 00:21:02.594 }, 00:21:02.594 { 00:21:02.594 "name": "BaseBdev2", 00:21:02.594 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:02.594 "is_configured": true, 00:21:02.594 "data_offset": 256, 00:21:02.594 "data_size": 7936 00:21:02.594 } 00:21:02.594 ] 00:21:02.594 }' 00:21:02.594 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.594 10:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.162 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.162 "name": "raid_bdev1", 00:21:03.162 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:03.162 "strip_size_kb": 0, 00:21:03.162 "state": "online", 00:21:03.162 "raid_level": "raid1", 00:21:03.162 "superblock": true, 00:21:03.162 "num_base_bdevs": 2, 00:21:03.162 "num_base_bdevs_discovered": 2, 00:21:03.162 "num_base_bdevs_operational": 2, 00:21:03.162 "base_bdevs_list": [ 00:21:03.162 { 00:21:03.162 "name": "spare", 00:21:03.162 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:21:03.162 "is_configured": true, 00:21:03.162 "data_offset": 256, 00:21:03.162 "data_size": 7936 00:21:03.162 }, 00:21:03.162 { 00:21:03.162 "name": "BaseBdev2", 00:21:03.162 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:03.162 "is_configured": true, 00:21:03.162 "data_offset": 256, 00:21:03.162 "data_size": 7936 00:21:03.162 } 00:21:03.162 ] 00:21:03.162 }' 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.163 10:49:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.163 [2024-11-15 10:49:33.643688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.163 "name": "raid_bdev1", 00:21:03.163 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:03.163 "strip_size_kb": 0, 00:21:03.163 "state": "online", 00:21:03.163 "raid_level": "raid1", 00:21:03.163 "superblock": true, 00:21:03.163 "num_base_bdevs": 2, 00:21:03.163 "num_base_bdevs_discovered": 1, 00:21:03.163 "num_base_bdevs_operational": 1, 00:21:03.163 "base_bdevs_list": [ 00:21:03.163 { 00:21:03.163 "name": null, 00:21:03.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.163 "is_configured": false, 00:21:03.163 "data_offset": 0, 00:21:03.163 "data_size": 7936 00:21:03.163 }, 00:21:03.163 { 00:21:03.163 "name": "BaseBdev2", 00:21:03.163 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:03.163 "is_configured": true, 00:21:03.163 "data_offset": 256, 00:21:03.163 "data_size": 7936 00:21:03.163 } 00:21:03.163 ] 00:21:03.163 }' 00:21:03.163 10:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.163 10:49:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.731 10:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:03.731 10:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.731 10:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.731 [2024-11-15 10:49:34.200154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.731 [2024-11-15 10:49:34.200432] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:03.731 [2024-11-15 10:49:34.200463] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:03.731 [2024-11-15 10:49:34.200514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.731 [2024-11-15 10:49:34.212874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:03.731 10:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.731 10:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:03.731 [2024-11-15 10:49:34.215240] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.676 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.676 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.676 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:04.676 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:04.676 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.676 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.676 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.676 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.676 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.948 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.948 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.948 "name": "raid_bdev1", 00:21:04.948 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:04.948 "strip_size_kb": 0, 00:21:04.948 "state": "online", 00:21:04.948 "raid_level": "raid1", 00:21:04.948 "superblock": true, 00:21:04.948 "num_base_bdevs": 2, 00:21:04.948 "num_base_bdevs_discovered": 2, 00:21:04.948 "num_base_bdevs_operational": 2, 00:21:04.948 "process": { 00:21:04.948 "type": "rebuild", 00:21:04.948 "target": "spare", 00:21:04.948 "progress": { 00:21:04.948 "blocks": 2560, 00:21:04.948 "percent": 32 00:21:04.948 } 00:21:04.948 }, 00:21:04.948 "base_bdevs_list": [ 00:21:04.948 { 00:21:04.948 "name": "spare", 00:21:04.948 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:21:04.948 "is_configured": true, 00:21:04.948 "data_offset": 256, 00:21:04.948 "data_size": 7936 00:21:04.948 }, 00:21:04.948 { 00:21:04.948 "name": "BaseBdev2", 00:21:04.948 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:04.948 "is_configured": true, 00:21:04.948 "data_offset": 256, 00:21:04.948 "data_size": 7936 00:21:04.948 } 00:21:04.948 ] 00:21:04.948 }' 00:21:04.948 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.949 [2024-11-15 10:49:35.405155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.949 [2024-11-15 10:49:35.422664] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:04.949 [2024-11-15 10:49:35.423017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.949 [2024-11-15 10:49:35.423158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.949 [2024-11-15 10:49:35.423233] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.949 "name": "raid_bdev1", 00:21:04.949 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:04.949 "strip_size_kb": 0, 00:21:04.949 "state": "online", 00:21:04.949 "raid_level": "raid1", 00:21:04.949 "superblock": true, 00:21:04.949 "num_base_bdevs": 2, 00:21:04.949 "num_base_bdevs_discovered": 1, 00:21:04.949 "num_base_bdevs_operational": 1, 00:21:04.949 "base_bdevs_list": [ 00:21:04.949 { 00:21:04.949 "name": null, 00:21:04.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.949 
"is_configured": false, 00:21:04.949 "data_offset": 0, 00:21:04.949 "data_size": 7936 00:21:04.949 }, 00:21:04.949 { 00:21:04.949 "name": "BaseBdev2", 00:21:04.949 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:04.949 "is_configured": true, 00:21:04.949 "data_offset": 256, 00:21:04.949 "data_size": 7936 00:21:04.949 } 00:21:04.949 ] 00:21:04.949 }' 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.949 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.516 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:05.516 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.516 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.516 [2024-11-15 10:49:35.929282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:05.516 [2024-11-15 10:49:35.929402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.516 [2024-11-15 10:49:35.929443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:05.516 [2024-11-15 10:49:35.929461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.516 [2024-11-15 10:49:35.929764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.516 [2024-11-15 10:49:35.929797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:05.516 [2024-11-15 10:49:35.929877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:05.516 [2024-11-15 10:49:35.929902] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:21:05.516 [2024-11-15 10:49:35.929915] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:05.516 [2024-11-15 10:49:35.929946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.516 [2024-11-15 10:49:35.942502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:05.516 spare 00:21:05.516 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.516 10:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:05.516 [2024-11-15 10:49:35.944821] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:06.452 10:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.452 "name": "raid_bdev1", 00:21:06.452 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:06.452 "strip_size_kb": 0, 00:21:06.452 "state": "online", 00:21:06.452 "raid_level": "raid1", 00:21:06.452 "superblock": true, 00:21:06.452 "num_base_bdevs": 2, 00:21:06.452 "num_base_bdevs_discovered": 2, 00:21:06.452 "num_base_bdevs_operational": 2, 00:21:06.452 "process": { 00:21:06.453 "type": "rebuild", 00:21:06.453 "target": "spare", 00:21:06.453 "progress": { 00:21:06.453 "blocks": 2560, 00:21:06.453 "percent": 32 00:21:06.453 } 00:21:06.453 }, 00:21:06.453 "base_bdevs_list": [ 00:21:06.453 { 00:21:06.453 "name": "spare", 00:21:06.453 "uuid": "52451af7-2934-50ea-930e-4ce430b07b49", 00:21:06.453 "is_configured": true, 00:21:06.453 "data_offset": 256, 00:21:06.453 "data_size": 7936 00:21:06.453 }, 00:21:06.453 { 00:21:06.453 "name": "BaseBdev2", 00:21:06.453 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:06.453 "is_configured": true, 00:21:06.453 "data_offset": 256, 00:21:06.453 "data_size": 7936 00:21:06.453 } 00:21:06.453 ] 00:21:06.453 }' 00:21:06.453 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.711 10:49:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.711 [2024-11-15 10:49:37.123154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.711 [2024-11-15 10:49:37.152090] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:06.711 [2024-11-15 10:49:37.152198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.711 [2024-11-15 10:49:37.152227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.711 [2024-11-15 10:49:37.152240] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.711 10:49:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.711 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.712 "name": "raid_bdev1", 00:21:06.712 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:06.712 "strip_size_kb": 0, 00:21:06.712 "state": "online", 00:21:06.712 "raid_level": "raid1", 00:21:06.712 "superblock": true, 00:21:06.712 "num_base_bdevs": 2, 00:21:06.712 "num_base_bdevs_discovered": 1, 00:21:06.712 "num_base_bdevs_operational": 1, 00:21:06.712 "base_bdevs_list": [ 00:21:06.712 { 00:21:06.712 "name": null, 00:21:06.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.712 "is_configured": false, 00:21:06.712 "data_offset": 0, 00:21:06.712 "data_size": 7936 00:21:06.712 }, 00:21:06.712 { 00:21:06.712 "name": "BaseBdev2", 00:21:06.712 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:06.712 "is_configured": true, 00:21:06.712 "data_offset": 256, 00:21:06.712 "data_size": 7936 00:21:06.712 } 00:21:06.712 ] 00:21:06.712 }' 00:21:06.712 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.712 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.276 "name": "raid_bdev1", 00:21:07.276 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:07.276 "strip_size_kb": 0, 00:21:07.276 "state": "online", 00:21:07.276 "raid_level": "raid1", 00:21:07.276 "superblock": true, 00:21:07.276 "num_base_bdevs": 2, 00:21:07.276 "num_base_bdevs_discovered": 1, 00:21:07.276 "num_base_bdevs_operational": 1, 00:21:07.276 "base_bdevs_list": [ 00:21:07.276 { 00:21:07.276 "name": null, 00:21:07.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.276 "is_configured": false, 00:21:07.276 "data_offset": 0, 00:21:07.276 "data_size": 7936 00:21:07.276 }, 00:21:07.276 { 00:21:07.276 "name": "BaseBdev2", 00:21:07.276 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:07.276 "is_configured": true, 
00:21:07.276 "data_offset": 256, 00:21:07.276 "data_size": 7936 00:21:07.276 } 00:21:07.276 ] 00:21:07.276 }' 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.276 [2024-11-15 10:49:37.817557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:07.276 [2024-11-15 10:49:37.817643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.276 [2024-11-15 10:49:37.817683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:07.276 [2024-11-15 10:49:37.817699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.276 [2024-11-15 10:49:37.817978] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.276 [2024-11-15 10:49:37.818003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:07.276 [2024-11-15 10:49:37.818076] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:07.276 [2024-11-15 10:49:37.818107] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:07.276 [2024-11-15 10:49:37.818119] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:07.276 [2024-11-15 10:49:37.818132] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:07.276 BaseBdev1 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.276 10:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.648 "name": "raid_bdev1", 00:21:08.648 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:08.648 "strip_size_kb": 0, 00:21:08.648 "state": "online", 00:21:08.648 "raid_level": "raid1", 00:21:08.648 "superblock": true, 00:21:08.648 "num_base_bdevs": 2, 00:21:08.648 "num_base_bdevs_discovered": 1, 00:21:08.648 "num_base_bdevs_operational": 1, 00:21:08.648 "base_bdevs_list": [ 00:21:08.648 { 00:21:08.648 "name": null, 00:21:08.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.648 "is_configured": false, 00:21:08.648 "data_offset": 0, 00:21:08.648 "data_size": 7936 00:21:08.648 }, 00:21:08.648 { 00:21:08.648 "name": "BaseBdev2", 00:21:08.648 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:08.648 "is_configured": true, 00:21:08.648 "data_offset": 256, 00:21:08.648 "data_size": 7936 00:21:08.648 } 00:21:08.648 ] 00:21:08.648 }' 00:21:08.648 10:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.648 10:49:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:08.906 "name": "raid_bdev1", 00:21:08.906 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:08.906 "strip_size_kb": 0, 00:21:08.906 "state": "online", 00:21:08.906 "raid_level": "raid1", 00:21:08.906 "superblock": true, 00:21:08.906 "num_base_bdevs": 2, 00:21:08.906 "num_base_bdevs_discovered": 1, 00:21:08.906 "num_base_bdevs_operational": 1, 00:21:08.906 "base_bdevs_list": [ 00:21:08.906 { 00:21:08.906 "name": null, 00:21:08.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.906 "is_configured": false, 00:21:08.906 "data_offset": 0, 00:21:08.906 
"data_size": 7936 00:21:08.906 }, 00:21:08.906 { 00:21:08.906 "name": "BaseBdev2", 00:21:08.906 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:08.906 "is_configured": true, 00:21:08.906 "data_offset": 256, 00:21:08.906 "data_size": 7936 00:21:08.906 } 00:21:08.906 ] 00:21:08.906 }' 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:08.906 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:09.164 [2024-11-15 10:49:39.490071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.164 [2024-11-15 10:49:39.490278] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:09.164 [2024-11-15 10:49:39.490309] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:09.164 request: 00:21:09.164 { 00:21:09.164 "base_bdev": "BaseBdev1", 00:21:09.164 "raid_bdev": "raid_bdev1", 00:21:09.164 "method": "bdev_raid_add_base_bdev", 00:21:09.164 "req_id": 1 00:21:09.164 } 00:21:09.164 Got JSON-RPC error response 00:21:09.164 response: 00:21:09.164 { 00:21:09.164 "code": -22, 00:21:09.164 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:09.164 } 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:09.164 10:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.099 "name": "raid_bdev1", 00:21:10.099 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:10.099 "strip_size_kb": 0, 00:21:10.099 "state": "online", 00:21:10.099 "raid_level": "raid1", 00:21:10.099 "superblock": true, 00:21:10.099 "num_base_bdevs": 2, 00:21:10.099 "num_base_bdevs_discovered": 1, 00:21:10.099 "num_base_bdevs_operational": 1, 00:21:10.099 "base_bdevs_list": [ 
00:21:10.099 { 00:21:10.099 "name": null, 00:21:10.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.099 "is_configured": false, 00:21:10.099 "data_offset": 0, 00:21:10.099 "data_size": 7936 00:21:10.099 }, 00:21:10.099 { 00:21:10.099 "name": "BaseBdev2", 00:21:10.099 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:10.099 "is_configured": true, 00:21:10.099 "data_offset": 256, 00:21:10.099 "data_size": 7936 00:21:10.099 } 00:21:10.099 ] 00:21:10.099 }' 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.099 10:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.689 "name": "raid_bdev1", 00:21:10.689 "uuid": "52625e7c-e4f3-47de-ab14-306da52493af", 00:21:10.689 "strip_size_kb": 0, 00:21:10.689 "state": "online", 00:21:10.689 "raid_level": "raid1", 00:21:10.689 "superblock": true, 00:21:10.689 "num_base_bdevs": 2, 00:21:10.689 "num_base_bdevs_discovered": 1, 00:21:10.689 "num_base_bdevs_operational": 1, 00:21:10.689 "base_bdevs_list": [ 00:21:10.689 { 00:21:10.689 "name": null, 00:21:10.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.689 "is_configured": false, 00:21:10.689 "data_offset": 0, 00:21:10.689 "data_size": 7936 00:21:10.689 }, 00:21:10.689 { 00:21:10.689 "name": "BaseBdev2", 00:21:10.689 "uuid": "696edee0-91e9-554a-8e5b-432430308273", 00:21:10.689 "is_configured": true, 00:21:10.689 "data_offset": 256, 00:21:10.689 "data_size": 7936 00:21:10.689 } 00:21:10.689 ] 00:21:10.689 }' 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88375 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88375 ']' 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88375 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:10.689 
10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88375 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:10.689 killing process with pid 88375 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88375' 00:21:10.689 Received shutdown signal, test time was about 60.000000 seconds 00:21:10.689 00:21:10.689 Latency(us) 00:21:10.689 [2024-11-15T10:49:41.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.689 [2024-11-15T10:49:41.249Z] =================================================================================================================== 00:21:10.689 [2024-11-15T10:49:41.249Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88375 00:21:10.689 [2024-11-15 10:49:41.201866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:10.689 10:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88375 00:21:10.689 [2024-11-15 10:49:41.202035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.689 [2024-11-15 10:49:41.202103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:10.689 [2024-11-15 10:49:41.202123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:10.949 [2024-11-15 10:49:41.485605] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.326 10:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:21:12.326 00:21:12.326 real 0m22.465s 00:21:12.326 user 0m30.696s 00:21:12.326 sys 0m2.649s 00:21:12.326 10:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:12.326 ************************************ 00:21:12.326 END TEST raid_rebuild_test_sb_md_separate 00:21:12.326 ************************************ 00:21:12.326 10:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.326 10:49:42 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:21:12.326 10:49:42 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:21:12.326 10:49:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:12.326 10:49:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:12.326 10:49:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:12.326 ************************************ 00:21:12.326 START TEST raid_state_function_test_sb_md_interleaved 00:21:12.326 ************************************ 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:12.326 10:49:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89083 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89083' 00:21:12.326 Process raid pid: 89083 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89083 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89083 ']' 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:12.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:12.326 10:49:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:12.326 [2024-11-15 10:49:42.690779] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:21:12.326 [2024-11-15 10:49:42.691050] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.585 [2024-11-15 10:49:42.891428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.585 [2024-11-15 10:49:43.015004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.844 [2024-11-15 10:49:43.207068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:12.844 [2024-11-15 10:49:43.207117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.103 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:13.103 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:21:13.103 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:13.103 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.103 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.103 [2024-11-15 10:49:43.655894] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:13.103 [2024-11-15 10:49:43.655962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:13.103 [2024-11-15 10:49:43.655979] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:13.103 [2024-11-15 10:49:43.655996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:13.103 10:49:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.363 10:49:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.363 "name": "Existed_Raid", 00:21:13.363 "uuid": "e5f0a2f4-89e2-401b-be35-47bb8c207198", 00:21:13.363 "strip_size_kb": 0, 00:21:13.363 "state": "configuring", 00:21:13.363 "raid_level": "raid1", 00:21:13.363 "superblock": true, 00:21:13.363 "num_base_bdevs": 2, 00:21:13.363 "num_base_bdevs_discovered": 0, 00:21:13.363 "num_base_bdevs_operational": 2, 00:21:13.363 "base_bdevs_list": [ 00:21:13.363 { 00:21:13.363 "name": "BaseBdev1", 00:21:13.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.363 "is_configured": false, 00:21:13.363 "data_offset": 0, 00:21:13.363 "data_size": 0 00:21:13.363 }, 00:21:13.363 { 00:21:13.363 "name": "BaseBdev2", 00:21:13.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.363 "is_configured": false, 00:21:13.363 "data_offset": 0, 00:21:13.363 "data_size": 0 00:21:13.363 } 00:21:13.363 ] 00:21:13.363 }' 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.363 10:49:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.622 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:13.622 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.622 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.622 [2024-11-15 10:49:44.167981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:13.622 [2024-11-15 10:49:44.168028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:21:13.622 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.622 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:13.622 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.622 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.622 [2024-11-15 10:49:44.175975] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:13.622 [2024-11-15 10:49:44.176036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:13.622 [2024-11-15 10:49:44.176053] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:13.622 [2024-11-15 10:49:44.176073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.881 [2024-11-15 10:49:44.218280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.881 BaseBdev1 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:13.881 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.882 [ 00:21:13.882 { 00:21:13.882 "name": "BaseBdev1", 00:21:13.882 "aliases": [ 00:21:13.882 "bcc92693-cfd2-4924-b4ab-aa45cced1dd5" 00:21:13.882 ], 00:21:13.882 "product_name": "Malloc disk", 00:21:13.882 "block_size": 4128, 00:21:13.882 "num_blocks": 8192, 00:21:13.882 "uuid": "bcc92693-cfd2-4924-b4ab-aa45cced1dd5", 00:21:13.882 "md_size": 32, 00:21:13.882 
"md_interleave": true, 00:21:13.882 "dif_type": 0, 00:21:13.882 "assigned_rate_limits": { 00:21:13.882 "rw_ios_per_sec": 0, 00:21:13.882 "rw_mbytes_per_sec": 0, 00:21:13.882 "r_mbytes_per_sec": 0, 00:21:13.882 "w_mbytes_per_sec": 0 00:21:13.882 }, 00:21:13.882 "claimed": true, 00:21:13.882 "claim_type": "exclusive_write", 00:21:13.882 "zoned": false, 00:21:13.882 "supported_io_types": { 00:21:13.882 "read": true, 00:21:13.882 "write": true, 00:21:13.882 "unmap": true, 00:21:13.882 "flush": true, 00:21:13.882 "reset": true, 00:21:13.882 "nvme_admin": false, 00:21:13.882 "nvme_io": false, 00:21:13.882 "nvme_io_md": false, 00:21:13.882 "write_zeroes": true, 00:21:13.882 "zcopy": true, 00:21:13.882 "get_zone_info": false, 00:21:13.882 "zone_management": false, 00:21:13.882 "zone_append": false, 00:21:13.882 "compare": false, 00:21:13.882 "compare_and_write": false, 00:21:13.882 "abort": true, 00:21:13.882 "seek_hole": false, 00:21:13.882 "seek_data": false, 00:21:13.882 "copy": true, 00:21:13.882 "nvme_iov_md": false 00:21:13.882 }, 00:21:13.882 "memory_domains": [ 00:21:13.882 { 00:21:13.882 "dma_device_id": "system", 00:21:13.882 "dma_device_type": 1 00:21:13.882 }, 00:21:13.882 { 00:21:13.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.882 "dma_device_type": 2 00:21:13.882 } 00:21:13.882 ], 00:21:13.882 "driver_specific": {} 00:21:13.882 } 00:21:13.882 ] 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.882 10:49:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.882 "name": "Existed_Raid", 00:21:13.882 "uuid": "78ec162a-36c1-4718-8ede-d24b771a9feb", 00:21:13.882 "strip_size_kb": 0, 00:21:13.882 "state": "configuring", 00:21:13.882 "raid_level": "raid1", 
00:21:13.882 "superblock": true, 00:21:13.882 "num_base_bdevs": 2, 00:21:13.882 "num_base_bdevs_discovered": 1, 00:21:13.882 "num_base_bdevs_operational": 2, 00:21:13.882 "base_bdevs_list": [ 00:21:13.882 { 00:21:13.882 "name": "BaseBdev1", 00:21:13.882 "uuid": "bcc92693-cfd2-4924-b4ab-aa45cced1dd5", 00:21:13.882 "is_configured": true, 00:21:13.882 "data_offset": 256, 00:21:13.882 "data_size": 7936 00:21:13.882 }, 00:21:13.882 { 00:21:13.882 "name": "BaseBdev2", 00:21:13.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.882 "is_configured": false, 00:21:13.882 "data_offset": 0, 00:21:13.882 "data_size": 0 00:21:13.882 } 00:21:13.882 ] 00:21:13.882 }' 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.882 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.460 [2024-11-15 10:49:44.786555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:14.460 [2024-11-15 10:49:44.786622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.460 [2024-11-15 10:49:44.794577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:14.460 [2024-11-15 10:49:44.796845] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:14.460 [2024-11-15 10:49:44.796912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.460 
10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.460 "name": "Existed_Raid", 00:21:14.460 "uuid": "ffc769ae-a649-435f-821f-7fba421d1cf0", 00:21:14.460 "strip_size_kb": 0, 00:21:14.460 "state": "configuring", 00:21:14.460 "raid_level": "raid1", 00:21:14.460 "superblock": true, 00:21:14.460 "num_base_bdevs": 2, 00:21:14.460 "num_base_bdevs_discovered": 1, 00:21:14.460 "num_base_bdevs_operational": 2, 00:21:14.460 "base_bdevs_list": [ 00:21:14.460 { 00:21:14.460 "name": "BaseBdev1", 00:21:14.460 "uuid": "bcc92693-cfd2-4924-b4ab-aa45cced1dd5", 00:21:14.460 "is_configured": true, 00:21:14.460 "data_offset": 256, 00:21:14.460 "data_size": 7936 00:21:14.460 }, 00:21:14.460 { 00:21:14.460 "name": "BaseBdev2", 00:21:14.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.460 "is_configured": false, 00:21:14.460 "data_offset": 0, 00:21:14.460 "data_size": 0 00:21:14.460 } 00:21:14.460 ] 00:21:14.460 }' 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:14.460 10:49:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.028 [2024-11-15 10:49:45.320785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:15.028 [2024-11-15 10:49:45.321045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:15.028 [2024-11-15 10:49:45.321066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:15.028 [2024-11-15 10:49:45.321177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:15.028 [2024-11-15 10:49:45.321273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:15.028 [2024-11-15 10:49:45.321292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:15.028 [2024-11-15 10:49:45.321403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.028 BaseBdev2 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.028 [ 00:21:15.028 { 00:21:15.028 "name": "BaseBdev2", 00:21:15.028 "aliases": [ 00:21:15.028 "10a1fcfa-9769-4695-97f5-86d36226fdba" 00:21:15.028 ], 00:21:15.028 "product_name": "Malloc disk", 00:21:15.028 "block_size": 4128, 00:21:15.028 "num_blocks": 8192, 00:21:15.028 "uuid": "10a1fcfa-9769-4695-97f5-86d36226fdba", 00:21:15.028 "md_size": 32, 00:21:15.028 "md_interleave": true, 00:21:15.028 "dif_type": 0, 00:21:15.028 "assigned_rate_limits": { 00:21:15.028 "rw_ios_per_sec": 0, 00:21:15.028 "rw_mbytes_per_sec": 0, 00:21:15.028 "r_mbytes_per_sec": 0, 00:21:15.028 "w_mbytes_per_sec": 0 00:21:15.028 }, 00:21:15.028 "claimed": true, 00:21:15.028 "claim_type": "exclusive_write", 
00:21:15.028 "zoned": false, 00:21:15.028 "supported_io_types": { 00:21:15.028 "read": true, 00:21:15.028 "write": true, 00:21:15.028 "unmap": true, 00:21:15.028 "flush": true, 00:21:15.028 "reset": true, 00:21:15.028 "nvme_admin": false, 00:21:15.028 "nvme_io": false, 00:21:15.028 "nvme_io_md": false, 00:21:15.028 "write_zeroes": true, 00:21:15.028 "zcopy": true, 00:21:15.028 "get_zone_info": false, 00:21:15.028 "zone_management": false, 00:21:15.028 "zone_append": false, 00:21:15.028 "compare": false, 00:21:15.028 "compare_and_write": false, 00:21:15.028 "abort": true, 00:21:15.028 "seek_hole": false, 00:21:15.028 "seek_data": false, 00:21:15.028 "copy": true, 00:21:15.028 "nvme_iov_md": false 00:21:15.028 }, 00:21:15.028 "memory_domains": [ 00:21:15.028 { 00:21:15.028 "dma_device_id": "system", 00:21:15.028 "dma_device_type": 1 00:21:15.028 }, 00:21:15.028 { 00:21:15.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.028 "dma_device_type": 2 00:21:15.028 } 00:21:15.028 ], 00:21:15.028 "driver_specific": {} 00:21:15.028 } 00:21:15.028 ] 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.028 
10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.028 "name": "Existed_Raid", 00:21:15.028 "uuid": "ffc769ae-a649-435f-821f-7fba421d1cf0", 00:21:15.028 "strip_size_kb": 0, 00:21:15.028 "state": "online", 00:21:15.028 "raid_level": "raid1", 00:21:15.028 "superblock": true, 00:21:15.028 "num_base_bdevs": 2, 00:21:15.028 "num_base_bdevs_discovered": 2, 00:21:15.028 
"num_base_bdevs_operational": 2, 00:21:15.028 "base_bdevs_list": [ 00:21:15.028 { 00:21:15.028 "name": "BaseBdev1", 00:21:15.028 "uuid": "bcc92693-cfd2-4924-b4ab-aa45cced1dd5", 00:21:15.028 "is_configured": true, 00:21:15.028 "data_offset": 256, 00:21:15.028 "data_size": 7936 00:21:15.028 }, 00:21:15.028 { 00:21:15.028 "name": "BaseBdev2", 00:21:15.028 "uuid": "10a1fcfa-9769-4695-97f5-86d36226fdba", 00:21:15.028 "is_configured": true, 00:21:15.028 "data_offset": 256, 00:21:15.028 "data_size": 7936 00:21:15.028 } 00:21:15.028 ] 00:21:15.028 }' 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.028 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.596 10:49:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.596 [2024-11-15 10:49:45.869402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:15.596 "name": "Existed_Raid", 00:21:15.596 "aliases": [ 00:21:15.596 "ffc769ae-a649-435f-821f-7fba421d1cf0" 00:21:15.596 ], 00:21:15.596 "product_name": "Raid Volume", 00:21:15.596 "block_size": 4128, 00:21:15.596 "num_blocks": 7936, 00:21:15.596 "uuid": "ffc769ae-a649-435f-821f-7fba421d1cf0", 00:21:15.596 "md_size": 32, 00:21:15.596 "md_interleave": true, 00:21:15.596 "dif_type": 0, 00:21:15.596 "assigned_rate_limits": { 00:21:15.596 "rw_ios_per_sec": 0, 00:21:15.596 "rw_mbytes_per_sec": 0, 00:21:15.596 "r_mbytes_per_sec": 0, 00:21:15.596 "w_mbytes_per_sec": 0 00:21:15.596 }, 00:21:15.596 "claimed": false, 00:21:15.596 "zoned": false, 00:21:15.596 "supported_io_types": { 00:21:15.596 "read": true, 00:21:15.596 "write": true, 00:21:15.596 "unmap": false, 00:21:15.596 "flush": false, 00:21:15.596 "reset": true, 00:21:15.596 "nvme_admin": false, 00:21:15.596 "nvme_io": false, 00:21:15.596 "nvme_io_md": false, 00:21:15.596 "write_zeroes": true, 00:21:15.596 "zcopy": false, 00:21:15.596 "get_zone_info": false, 00:21:15.596 "zone_management": false, 00:21:15.596 "zone_append": false, 00:21:15.596 "compare": false, 00:21:15.596 "compare_and_write": false, 00:21:15.596 "abort": false, 00:21:15.596 "seek_hole": false, 00:21:15.596 "seek_data": false, 00:21:15.596 "copy": false, 00:21:15.596 "nvme_iov_md": false 00:21:15.596 }, 00:21:15.596 "memory_domains": [ 00:21:15.596 { 00:21:15.596 "dma_device_id": "system", 00:21:15.596 "dma_device_type": 1 00:21:15.596 }, 00:21:15.596 { 00:21:15.596 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:15.596 "dma_device_type": 2 00:21:15.596 }, 00:21:15.596 { 00:21:15.596 "dma_device_id": "system", 00:21:15.596 "dma_device_type": 1 00:21:15.596 }, 00:21:15.596 { 00:21:15.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.596 "dma_device_type": 2 00:21:15.596 } 00:21:15.596 ], 00:21:15.596 "driver_specific": { 00:21:15.596 "raid": { 00:21:15.596 "uuid": "ffc769ae-a649-435f-821f-7fba421d1cf0", 00:21:15.596 "strip_size_kb": 0, 00:21:15.596 "state": "online", 00:21:15.596 "raid_level": "raid1", 00:21:15.596 "superblock": true, 00:21:15.596 "num_base_bdevs": 2, 00:21:15.596 "num_base_bdevs_discovered": 2, 00:21:15.596 "num_base_bdevs_operational": 2, 00:21:15.596 "base_bdevs_list": [ 00:21:15.596 { 00:21:15.596 "name": "BaseBdev1", 00:21:15.596 "uuid": "bcc92693-cfd2-4924-b4ab-aa45cced1dd5", 00:21:15.596 "is_configured": true, 00:21:15.596 "data_offset": 256, 00:21:15.596 "data_size": 7936 00:21:15.596 }, 00:21:15.596 { 00:21:15.596 "name": "BaseBdev2", 00:21:15.596 "uuid": "10a1fcfa-9769-4695-97f5-86d36226fdba", 00:21:15.596 "is_configured": true, 00:21:15.596 "data_offset": 256, 00:21:15.596 "data_size": 7936 00:21:15.596 } 00:21:15.596 ] 00:21:15.596 } 00:21:15.596 } 00:21:15.596 }' 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:15.596 BaseBdev2' 00:21:15.596 10:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:15.596 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.597 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.597 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.597 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.597 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:15.597 
10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:15.597 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:15.597 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.597 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.597 [2024-11-15 10:49:46.149173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:15.855 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.856 10:49:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.856 "name": "Existed_Raid", 00:21:15.856 "uuid": "ffc769ae-a649-435f-821f-7fba421d1cf0", 00:21:15.856 "strip_size_kb": 0, 00:21:15.856 "state": "online", 00:21:15.856 "raid_level": "raid1", 00:21:15.856 "superblock": true, 00:21:15.856 "num_base_bdevs": 2, 00:21:15.856 "num_base_bdevs_discovered": 1, 00:21:15.856 "num_base_bdevs_operational": 1, 00:21:15.856 "base_bdevs_list": [ 00:21:15.856 { 00:21:15.856 "name": null, 00:21:15.856 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:15.856 "is_configured": false, 00:21:15.856 "data_offset": 0, 00:21:15.856 "data_size": 7936 00:21:15.856 }, 00:21:15.856 { 00:21:15.856 "name": "BaseBdev2", 00:21:15.856 "uuid": "10a1fcfa-9769-4695-97f5-86d36226fdba", 00:21:15.856 "is_configured": true, 00:21:15.856 "data_offset": 256, 00:21:15.856 "data_size": 7936 00:21:15.856 } 00:21:15.856 ] 00:21:15.856 }' 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.856 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:16.423 10:49:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.423 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.423 [2024-11-15 10:49:46.820269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.423 [2024-11-15 10:49:46.820677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:16.423 [2024-11-15 10:49:46.902840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.424 [2024-11-15 10:49:46.903101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:16.424 [2024-11-15 10:49:46.903151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89083 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89083 ']' 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89083 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.424 10:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89083 00:21:16.682 killing process with pid 89083 00:21:16.682 10:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:16.682 10:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:16.682 10:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89083' 00:21:16.682 10:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89083 00:21:16.682 10:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89083 00:21:16.682 [2024-11-15 10:49:47.001437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:16.682 [2024-11-15 10:49:47.015955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:17.629 
************************************ 00:21:17.629 END TEST raid_state_function_test_sb_md_interleaved 00:21:17.629 ************************************ 00:21:17.629 10:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:21:17.629 00:21:17.629 real 0m5.467s 00:21:17.629 user 0m8.360s 00:21:17.629 sys 0m0.720s 00:21:17.629 10:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:17.629 10:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.629 10:49:48 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:21:17.629 10:49:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:21:17.629 10:49:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:17.629 10:49:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:17.629 ************************************ 00:21:17.629 START TEST raid_superblock_test_md_interleaved 00:21:17.629 ************************************ 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89341 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89341 00:21:17.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89341 ']' 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:17.629 10:49:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.887 [2024-11-15 10:49:48.188176] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:21:17.887 [2024-11-15 10:49:48.188331] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89341 ] 00:21:17.887 [2024-11-15 10:49:48.360804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.145 [2024-11-15 10:49:48.503722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.145 [2024-11-15 10:49:48.684358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:18.145 [2024-11-15 10:49:48.684434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.712 malloc1 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.712 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.971 [2024-11-15 10:49:49.272382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:21:18.971 [2024-11-15 10:49:49.272460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.971 [2024-11-15 10:49:49.272496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:18.971 [2024-11-15 10:49:49.272512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.971 [2024-11-15 10:49:49.274848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.971 [2024-11-15 10:49:49.274896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:18.971 pt1 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.971 malloc2 00:21:18.971 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.972 [2024-11-15 10:49:49.325216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:18.972 [2024-11-15 10:49:49.325305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.972 [2024-11-15 10:49:49.325340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:18.972 [2024-11-15 10:49:49.325372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.972 [2024-11-15 10:49:49.327743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.972 [2024-11-15 10:49:49.327790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:18.972 pt2 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:18.972 
10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.972 [2024-11-15 10:49:49.333245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:18.972 [2024-11-15 10:49:49.335566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:18.972 [2024-11-15 10:49:49.335831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:18.972 [2024-11-15 10:49:49.335851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:18.972 [2024-11-15 10:49:49.335965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:18.972 [2024-11-15 10:49:49.336101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:18.972 [2024-11-15 10:49:49.336122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:18.972 [2024-11-15 10:49:49.336229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.972 "name": "raid_bdev1", 00:21:18.972 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:18.972 "strip_size_kb": 0, 00:21:18.972 "state": "online", 00:21:18.972 "raid_level": "raid1", 00:21:18.972 "superblock": true, 00:21:18.972 "num_base_bdevs": 2, 00:21:18.972 "num_base_bdevs_discovered": 2, 00:21:18.972 "num_base_bdevs_operational": 2, 00:21:18.972 "base_bdevs_list": [ 00:21:18.972 { 00:21:18.972 "name": "pt1", 00:21:18.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:18.972 "is_configured": true, 00:21:18.972 "data_offset": 256, 00:21:18.972 "data_size": 7936 00:21:18.972 }, 00:21:18.972 { 00:21:18.972 "name": 
"pt2", 00:21:18.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.972 "is_configured": true, 00:21:18.972 "data_offset": 256, 00:21:18.972 "data_size": 7936 00:21:18.972 } 00:21:18.972 ] 00:21:18.972 }' 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.972 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.538 [2024-11-15 10:49:49.833709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.538 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:21:19.538 "name": "raid_bdev1", 00:21:19.538 "aliases": [ 00:21:19.538 "e6fdab81-63d2-45e7-a090-3c22af1ebdfe" 00:21:19.538 ], 00:21:19.538 "product_name": "Raid Volume", 00:21:19.538 "block_size": 4128, 00:21:19.538 "num_blocks": 7936, 00:21:19.538 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:19.538 "md_size": 32, 00:21:19.538 "md_interleave": true, 00:21:19.538 "dif_type": 0, 00:21:19.538 "assigned_rate_limits": { 00:21:19.538 "rw_ios_per_sec": 0, 00:21:19.538 "rw_mbytes_per_sec": 0, 00:21:19.538 "r_mbytes_per_sec": 0, 00:21:19.538 "w_mbytes_per_sec": 0 00:21:19.538 }, 00:21:19.538 "claimed": false, 00:21:19.538 "zoned": false, 00:21:19.538 "supported_io_types": { 00:21:19.538 "read": true, 00:21:19.538 "write": true, 00:21:19.538 "unmap": false, 00:21:19.538 "flush": false, 00:21:19.538 "reset": true, 00:21:19.538 "nvme_admin": false, 00:21:19.538 "nvme_io": false, 00:21:19.538 "nvme_io_md": false, 00:21:19.538 "write_zeroes": true, 00:21:19.538 "zcopy": false, 00:21:19.538 "get_zone_info": false, 00:21:19.538 "zone_management": false, 00:21:19.538 "zone_append": false, 00:21:19.538 "compare": false, 00:21:19.539 "compare_and_write": false, 00:21:19.539 "abort": false, 00:21:19.539 "seek_hole": false, 00:21:19.539 "seek_data": false, 00:21:19.539 "copy": false, 00:21:19.539 "nvme_iov_md": false 00:21:19.539 }, 00:21:19.539 "memory_domains": [ 00:21:19.539 { 00:21:19.539 "dma_device_id": "system", 00:21:19.539 "dma_device_type": 1 00:21:19.539 }, 00:21:19.539 { 00:21:19.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.539 "dma_device_type": 2 00:21:19.539 }, 00:21:19.539 { 00:21:19.539 "dma_device_id": "system", 00:21:19.539 "dma_device_type": 1 00:21:19.539 }, 00:21:19.539 { 00:21:19.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.539 "dma_device_type": 2 00:21:19.539 } 00:21:19.539 ], 00:21:19.539 "driver_specific": { 00:21:19.539 "raid": { 00:21:19.539 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:19.539 
"strip_size_kb": 0, 00:21:19.539 "state": "online", 00:21:19.539 "raid_level": "raid1", 00:21:19.539 "superblock": true, 00:21:19.539 "num_base_bdevs": 2, 00:21:19.539 "num_base_bdevs_discovered": 2, 00:21:19.539 "num_base_bdevs_operational": 2, 00:21:19.539 "base_bdevs_list": [ 00:21:19.539 { 00:21:19.539 "name": "pt1", 00:21:19.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:19.539 "is_configured": true, 00:21:19.539 "data_offset": 256, 00:21:19.539 "data_size": 7936 00:21:19.539 }, 00:21:19.539 { 00:21:19.539 "name": "pt2", 00:21:19.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:19.539 "is_configured": true, 00:21:19.539 "data_offset": 256, 00:21:19.539 "data_size": 7936 00:21:19.539 } 00:21:19.539 ] 00:21:19.539 } 00:21:19.539 } 00:21:19.539 }' 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:19.539 pt2' 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.539 10:49:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 
-- # set +x 00:21:19.539 [2024-11-15 10:49:50.061769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.539 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.797 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e6fdab81-63d2-45e7-a090-3c22af1ebdfe 00:21:19.797 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z e6fdab81-63d2-45e7-a090-3c22af1ebdfe ']' 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.798 [2024-11-15 10:49:50.129400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.798 [2024-11-15 10:49:50.129439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.798 [2024-11-15 10:49:50.129554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.798 [2024-11-15 10:49:50.129632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.798 [2024-11-15 10:49:50.129652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:19.798 10:49:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.798 [2024-11-15 10:49:50.265494] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:19.798 [2024-11-15 10:49:50.267879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:19.798 [2024-11-15 10:49:50.267992] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:19.798 [2024-11-15 10:49:50.268078] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:19.798 [2024-11-15 10:49:50.268106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.798 [2024-11-15 10:49:50.268122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:19.798 request: 00:21:19.798 { 00:21:19.798 "name": "raid_bdev1", 00:21:19.798 "raid_level": "raid1", 00:21:19.798 "base_bdevs": [ 00:21:19.798 "malloc1", 00:21:19.798 "malloc2" 00:21:19.798 ], 00:21:19.798 "superblock": false, 00:21:19.798 "method": "bdev_raid_create", 00:21:19.798 "req_id": 1 00:21:19.798 } 00:21:19.798 Got JSON-RPC error response 00:21:19.798 response: 00:21:19.798 { 00:21:19.798 "code": -17, 00:21:19.798 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:19.798 } 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.798 [2024-11-15 10:49:50.341497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:19.798 [2024-11-15 10:49:50.341600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.798 [2024-11-15 10:49:50.341628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:19.798 [2024-11-15 10:49:50.341645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.798 [2024-11-15 10:49:50.344052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.798 [2024-11-15 10:49:50.344107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:19.798 [2024-11-15 10:49:50.344187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:21:19.798 [2024-11-15 10:49:50.344268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:19.798 pt1 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.798 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:20.056 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.056 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.056 "name": "raid_bdev1", 00:21:20.056 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:20.056 "strip_size_kb": 0, 00:21:20.056 "state": "configuring", 00:21:20.056 "raid_level": "raid1", 00:21:20.056 "superblock": true, 00:21:20.056 "num_base_bdevs": 2, 00:21:20.056 "num_base_bdevs_discovered": 1, 00:21:20.056 "num_base_bdevs_operational": 2, 00:21:20.056 "base_bdevs_list": [ 00:21:20.056 { 00:21:20.056 "name": "pt1", 00:21:20.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:20.056 "is_configured": true, 00:21:20.056 "data_offset": 256, 00:21:20.056 "data_size": 7936 00:21:20.056 }, 00:21:20.056 { 00:21:20.056 "name": null, 00:21:20.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:20.056 "is_configured": false, 00:21:20.056 "data_offset": 256, 00:21:20.057 "data_size": 7936 00:21:20.057 } 00:21:20.057 ] 00:21:20.057 }' 00:21:20.057 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.057 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.316 [2024-11-15 10:49:50.797571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:20.316 [2024-11-15 10:49:50.797665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.316 [2024-11-15 10:49:50.797698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:20.316 [2024-11-15 10:49:50.797714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.316 [2024-11-15 10:49:50.797935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.316 [2024-11-15 10:49:50.797976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:20.316 [2024-11-15 10:49:50.798045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:20.316 [2024-11-15 10:49:50.798082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:20.316 [2024-11-15 10:49:50.798198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:20.316 [2024-11-15 10:49:50.798219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:20.316 [2024-11-15 10:49:50.798312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:20.316 [2024-11-15 10:49:50.798427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:20.316 [2024-11-15 10:49:50.798442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:20.316 [2024-11-15 10:49:50.798530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.316 pt2 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.316 "name": "raid_bdev1", 00:21:20.316 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:20.316 "strip_size_kb": 0, 00:21:20.316 "state": "online", 00:21:20.316 "raid_level": "raid1", 00:21:20.316 "superblock": true, 00:21:20.316 "num_base_bdevs": 2, 00:21:20.316 "num_base_bdevs_discovered": 2, 00:21:20.316 "num_base_bdevs_operational": 2, 00:21:20.316 "base_bdevs_list": [ 00:21:20.316 { 00:21:20.316 "name": "pt1", 00:21:20.316 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:20.316 "is_configured": true, 00:21:20.316 "data_offset": 256, 00:21:20.316 "data_size": 7936 00:21:20.316 }, 00:21:20.316 { 00:21:20.316 "name": "pt2", 00:21:20.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:20.316 "is_configured": true, 00:21:20.316 "data_offset": 256, 00:21:20.316 "data_size": 7936 00:21:20.316 } 00:21:20.316 ] 00:21:20.316 }' 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.316 10:49:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:20.882 10:49:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.882 [2024-11-15 10:49:51.338181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:20.882 "name": "raid_bdev1", 00:21:20.882 "aliases": [ 00:21:20.882 "e6fdab81-63d2-45e7-a090-3c22af1ebdfe" 00:21:20.882 ], 00:21:20.882 "product_name": "Raid Volume", 00:21:20.882 "block_size": 4128, 00:21:20.882 "num_blocks": 7936, 00:21:20.882 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:20.882 "md_size": 32, 00:21:20.882 "md_interleave": true, 00:21:20.882 "dif_type": 0, 00:21:20.882 "assigned_rate_limits": { 00:21:20.882 "rw_ios_per_sec": 0, 00:21:20.882 "rw_mbytes_per_sec": 0, 00:21:20.882 "r_mbytes_per_sec": 0, 00:21:20.882 "w_mbytes_per_sec": 0 00:21:20.882 }, 00:21:20.882 "claimed": false, 00:21:20.882 "zoned": false, 00:21:20.882 "supported_io_types": { 00:21:20.882 "read": true, 00:21:20.882 "write": true, 00:21:20.882 "unmap": false, 00:21:20.882 "flush": false, 00:21:20.882 "reset": true, 00:21:20.882 "nvme_admin": false, 00:21:20.882 "nvme_io": false, 00:21:20.882 "nvme_io_md": false, 00:21:20.882 "write_zeroes": true, 00:21:20.882 "zcopy": false, 00:21:20.882 "get_zone_info": false, 00:21:20.882 "zone_management": 
false, 00:21:20.882 "zone_append": false, 00:21:20.882 "compare": false, 00:21:20.882 "compare_and_write": false, 00:21:20.882 "abort": false, 00:21:20.882 "seek_hole": false, 00:21:20.882 "seek_data": false, 00:21:20.882 "copy": false, 00:21:20.882 "nvme_iov_md": false 00:21:20.882 }, 00:21:20.882 "memory_domains": [ 00:21:20.882 { 00:21:20.882 "dma_device_id": "system", 00:21:20.882 "dma_device_type": 1 00:21:20.882 }, 00:21:20.882 { 00:21:20.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.882 "dma_device_type": 2 00:21:20.882 }, 00:21:20.882 { 00:21:20.882 "dma_device_id": "system", 00:21:20.882 "dma_device_type": 1 00:21:20.882 }, 00:21:20.882 { 00:21:20.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.882 "dma_device_type": 2 00:21:20.882 } 00:21:20.882 ], 00:21:20.882 "driver_specific": { 00:21:20.882 "raid": { 00:21:20.882 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:20.882 "strip_size_kb": 0, 00:21:20.882 "state": "online", 00:21:20.882 "raid_level": "raid1", 00:21:20.882 "superblock": true, 00:21:20.882 "num_base_bdevs": 2, 00:21:20.882 "num_base_bdevs_discovered": 2, 00:21:20.882 "num_base_bdevs_operational": 2, 00:21:20.882 "base_bdevs_list": [ 00:21:20.882 { 00:21:20.882 "name": "pt1", 00:21:20.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:20.882 "is_configured": true, 00:21:20.882 "data_offset": 256, 00:21:20.882 "data_size": 7936 00:21:20.882 }, 00:21:20.882 { 00:21:20.882 "name": "pt2", 00:21:20.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:20.882 "is_configured": true, 00:21:20.882 "data_offset": 256, 00:21:20.882 "data_size": 7936 00:21:20.882 } 00:21:20.882 ] 00:21:20.882 } 00:21:20.882 } 00:21:20.882 }' 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:21:20.882 pt2' 00:21:20.882 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.141 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:21.142 [2024-11-15 10:49:51.614392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' e6fdab81-63d2-45e7-a090-3c22af1ebdfe '!=' e6fdab81-63d2-45e7-a090-3c22af1ebdfe ']' 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.142 10:49:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.142 [2024-11-15 10:49:51.661882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.142 10:49:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.142 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.400 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.400 "name": "raid_bdev1", 00:21:21.400 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:21.400 "strip_size_kb": 0, 00:21:21.400 "state": "online", 00:21:21.400 "raid_level": "raid1", 00:21:21.400 "superblock": true, 00:21:21.400 "num_base_bdevs": 2, 00:21:21.400 "num_base_bdevs_discovered": 1, 00:21:21.400 "num_base_bdevs_operational": 1, 00:21:21.400 "base_bdevs_list": [ 00:21:21.400 { 00:21:21.400 "name": null, 00:21:21.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.400 "is_configured": false, 00:21:21.400 "data_offset": 0, 00:21:21.400 "data_size": 7936 00:21:21.400 }, 00:21:21.400 { 00:21:21.400 "name": "pt2", 00:21:21.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:21.400 "is_configured": true, 00:21:21.400 "data_offset": 256, 00:21:21.400 "data_size": 7936 00:21:21.400 } 00:21:21.400 ] 00:21:21.400 }' 00:21:21.400 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.400 10:49:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.659 [2024-11-15 10:49:52.133965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:21.659 [2024-11-15 10:49:52.134011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:21:21.659 [2024-11-15 10:49:52.134111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.659 [2024-11-15 10:49:52.134181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:21.659 [2024-11-15 10:49:52.134201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.659 
10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.659 [2024-11-15 10:49:52.202031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:21.659 [2024-11-15 10:49:52.202140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.659 [2024-11-15 10:49:52.202168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:21.659 [2024-11-15 10:49:52.202185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.659 [2024-11-15 10:49:52.204711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.659 [2024-11-15 10:49:52.204776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:21.659 [2024-11-15 10:49:52.204887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:21.659 [2024-11-15 10:49:52.204980] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:21.659 [2024-11-15 10:49:52.205084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:21.659 [2024-11-15 10:49:52.205107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:21.659 [2024-11-15 10:49:52.205239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:21.659 [2024-11-15 10:49:52.205336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:21.659 [2024-11-15 10:49:52.205376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:21.659 [2024-11-15 10:49:52.205473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.659 pt2 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.659 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.918 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.918 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.918 "name": "raid_bdev1", 00:21:21.918 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:21.918 "strip_size_kb": 0, 00:21:21.918 "state": "online", 00:21:21.918 "raid_level": "raid1", 00:21:21.918 "superblock": true, 00:21:21.918 "num_base_bdevs": 2, 00:21:21.918 "num_base_bdevs_discovered": 1, 00:21:21.918 "num_base_bdevs_operational": 1, 00:21:21.918 "base_bdevs_list": [ 00:21:21.918 { 00:21:21.918 "name": null, 00:21:21.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.918 "is_configured": false, 00:21:21.918 "data_offset": 256, 00:21:21.918 "data_size": 7936 00:21:21.918 }, 00:21:21.918 { 00:21:21.918 "name": "pt2", 00:21:21.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:21.918 "is_configured": true, 00:21:21.918 "data_offset": 256, 00:21:21.918 "data_size": 7936 00:21:21.918 } 00:21:21.918 ] 00:21:21.918 }' 00:21:21.918 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.918 10:49:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.177 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:22.177 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.177 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.177 [2024-11-15 10:49:52.718072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.177 [2024-11-15 10:49:52.718114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:22.177 [2024-11-15 10:49:52.718208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.177 [2024-11-15 10:49:52.718280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.177 [2024-11-15 10:49:52.718296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:22.177 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.177 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.177 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:22.177 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.177 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.177 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:22.436 10:49:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.436 [2024-11-15 10:49:52.786145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:22.436 [2024-11-15 10:49:52.786227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.436 [2024-11-15 10:49:52.786258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:22.436 [2024-11-15 10:49:52.786273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.436 [2024-11-15 10:49:52.788702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.436 [2024-11-15 10:49:52.788749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:22.436 [2024-11-15 10:49:52.788829] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:22.436 [2024-11-15 10:49:52.788896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:22.436 [2024-11-15 10:49:52.789030] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:22.436 [2024-11-15 10:49:52.789047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.436 [2024-11-15 10:49:52.789075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:21:22.436 [2024-11-15 10:49:52.789144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:22.436 [2024-11-15 10:49:52.789255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:22.436 [2024-11-15 10:49:52.789272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:22.436 [2024-11-15 10:49:52.789380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:22.436 [2024-11-15 10:49:52.789466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:22.436 [2024-11-15 10:49:52.789485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:22.436 [2024-11-15 10:49:52.789583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.436 pt1 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:22.436 10:49:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.436 "name": "raid_bdev1", 00:21:22.436 "uuid": "e6fdab81-63d2-45e7-a090-3c22af1ebdfe", 00:21:22.436 "strip_size_kb": 0, 00:21:22.436 "state": "online", 00:21:22.436 "raid_level": "raid1", 00:21:22.436 "superblock": true, 00:21:22.436 "num_base_bdevs": 2, 00:21:22.436 "num_base_bdevs_discovered": 1, 00:21:22.436 "num_base_bdevs_operational": 1, 00:21:22.436 "base_bdevs_list": [ 00:21:22.436 { 00:21:22.436 "name": null, 00:21:22.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.436 "is_configured": false, 00:21:22.436 "data_offset": 256, 00:21:22.436 "data_size": 7936 00:21:22.436 }, 00:21:22.436 { 00:21:22.436 "name": "pt2", 00:21:22.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.436 "is_configured": true, 00:21:22.436 "data_offset": 256, 00:21:22.436 
"data_size": 7936 00:21:22.436 } 00:21:22.436 ] 00:21:22.436 }' 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.436 10:49:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.003 [2024-11-15 10:49:53.342560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' e6fdab81-63d2-45e7-a090-3c22af1ebdfe '!=' e6fdab81-63d2-45e7-a090-3c22af1ebdfe ']' 00:21:23.003 10:49:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89341 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89341 ']' 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89341 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89341 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:23.003 killing process with pid 89341 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89341' 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 89341 00:21:23.003 [2024-11-15 10:49:53.419677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:23.003 10:49:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 89341 00:21:23.003 [2024-11-15 10:49:53.419801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.003 [2024-11-15 10:49:53.419867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:23.003 [2024-11-15 10:49:53.419889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:23.262 [2024-11-15 10:49:53.595992] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:24.198 10:49:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:21:24.198 00:21:24.199 real 0m6.530s 00:21:24.199 user 0m10.436s 00:21:24.199 sys 0m0.832s 00:21:24.199 10:49:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:24.199 ************************************ 00:21:24.199 END TEST raid_superblock_test_md_interleaved 00:21:24.199 ************************************ 00:21:24.199 10:49:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.199 10:49:54 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:21:24.199 10:49:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:24.199 10:49:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:24.199 10:49:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:24.199 ************************************ 00:21:24.199 START TEST raid_rebuild_test_sb_md_interleaved 00:21:24.199 ************************************ 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:21:24.199 10:49:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:24.199 
10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89665 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89665 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89665 ']' 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:24.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:24.199 10:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:24.458 [2024-11-15 10:49:54.773794] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:21:24.458 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:24.458 Zero copy mechanism will not be used. 
00:21:24.458 [2024-11-15 10:49:54.773968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89665 ] 00:21:24.458 [2024-11-15 10:49:54.950713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.716 [2024-11-15 10:49:55.056630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.716 [2024-11-15 10:49:55.236680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.716 [2024-11-15 10:49:55.236754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.286 BaseBdev1_malloc 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.286 10:49:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.286 [2024-11-15 10:49:55.806694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:25.286 [2024-11-15 10:49:55.806810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.286 [2024-11-15 10:49:55.806848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:25.286 [2024-11-15 10:49:55.806877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.286 [2024-11-15 10:49:55.809383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.286 [2024-11-15 10:49:55.809465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:25.286 BaseBdev1 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.286 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.546 BaseBdev2_malloc 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:25.546 [2024-11-15 10:49:55.858534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:25.546 [2024-11-15 10:49:55.858621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.546 [2024-11-15 10:49:55.858648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:25.546 [2024-11-15 10:49:55.858665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.546 [2024-11-15 10:49:55.861067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.546 [2024-11-15 10:49:55.861127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:25.546 BaseBdev2 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.546 spare_malloc 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.546 spare_delay 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.546 [2024-11-15 10:49:55.921383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:25.546 [2024-11-15 10:49:55.921477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.546 [2024-11-15 10:49:55.921513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:25.546 [2024-11-15 10:49:55.921533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.546 [2024-11-15 10:49:55.923975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.546 [2024-11-15 10:49:55.924043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:25.546 spare 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.546 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.546 [2024-11-15 10:49:55.929429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.546 [2024-11-15 10:49:55.931875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.546 [2024-11-15 
10:49:55.932134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:25.546 [2024-11-15 10:49:55.932157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:25.546 [2024-11-15 10:49:55.932252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:25.546 [2024-11-15 10:49:55.932351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:25.546 [2024-11-15 10:49:55.932404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:25.546 [2024-11-15 10:49:55.932531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.547 "name": "raid_bdev1", 00:21:25.547 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:25.547 "strip_size_kb": 0, 00:21:25.547 "state": "online", 00:21:25.547 "raid_level": "raid1", 00:21:25.547 "superblock": true, 00:21:25.547 "num_base_bdevs": 2, 00:21:25.547 "num_base_bdevs_discovered": 2, 00:21:25.547 "num_base_bdevs_operational": 2, 00:21:25.547 "base_bdevs_list": [ 00:21:25.547 { 00:21:25.547 "name": "BaseBdev1", 00:21:25.547 "uuid": "50ce1715-e613-5d49-ab83-b3e3dbc72327", 00:21:25.547 "is_configured": true, 00:21:25.547 "data_offset": 256, 00:21:25.547 "data_size": 7936 00:21:25.547 }, 00:21:25.547 { 00:21:25.547 "name": "BaseBdev2", 00:21:25.547 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:25.547 "is_configured": true, 00:21:25.547 "data_offset": 256, 00:21:25.547 "data_size": 7936 00:21:25.547 } 00:21:25.547 ] 00:21:25.547 }' 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.547 10:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.115 10:49:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.115 [2024-11-15 10:49:56.434019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:26.115 10:49:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.115 [2024-11-15 10:49:56.525656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.115 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.115 10:49:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.116 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.116 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.116 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.116 "name": "raid_bdev1", 00:21:26.116 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:26.116 "strip_size_kb": 0, 00:21:26.116 "state": "online", 00:21:26.116 "raid_level": "raid1", 00:21:26.116 "superblock": true, 00:21:26.116 "num_base_bdevs": 2, 00:21:26.116 "num_base_bdevs_discovered": 1, 00:21:26.116 "num_base_bdevs_operational": 1, 00:21:26.116 "base_bdevs_list": [ 00:21:26.116 { 00:21:26.116 "name": null, 00:21:26.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.116 "is_configured": false, 00:21:26.116 "data_offset": 0, 00:21:26.116 "data_size": 7936 00:21:26.116 }, 00:21:26.116 { 00:21:26.116 "name": "BaseBdev2", 00:21:26.116 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:26.116 "is_configured": true, 00:21:26.116 "data_offset": 256, 00:21:26.116 "data_size": 7936 00:21:26.116 } 00:21:26.116 ] 00:21:26.116 }' 00:21:26.116 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.116 10:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.685 10:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:26.685 10:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.685 10:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.685 [2024-11-15 10:49:57.033872] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.685 [2024-11-15 10:49:57.050184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:26.685 10:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.685 10:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:26.685 [2024-11-15 10:49:57.052697] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.622 "name": "raid_bdev1", 00:21:27.622 
"uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:27.622 "strip_size_kb": 0, 00:21:27.622 "state": "online", 00:21:27.622 "raid_level": "raid1", 00:21:27.622 "superblock": true, 00:21:27.622 "num_base_bdevs": 2, 00:21:27.622 "num_base_bdevs_discovered": 2, 00:21:27.622 "num_base_bdevs_operational": 2, 00:21:27.622 "process": { 00:21:27.622 "type": "rebuild", 00:21:27.622 "target": "spare", 00:21:27.622 "progress": { 00:21:27.622 "blocks": 2560, 00:21:27.622 "percent": 32 00:21:27.622 } 00:21:27.622 }, 00:21:27.622 "base_bdevs_list": [ 00:21:27.622 { 00:21:27.622 "name": "spare", 00:21:27.622 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:27.622 "is_configured": true, 00:21:27.622 "data_offset": 256, 00:21:27.622 "data_size": 7936 00:21:27.622 }, 00:21:27.622 { 00:21:27.622 "name": "BaseBdev2", 00:21:27.622 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:27.622 "is_configured": true, 00:21:27.622 "data_offset": 256, 00:21:27.622 "data_size": 7936 00:21:27.622 } 00:21:27.622 ] 00:21:27.622 }' 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.622 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.881 [2024-11-15 10:49:58.238176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:21:27.881 [2024-11-15 10:49:58.259682] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:27.881 [2024-11-15 10:49:58.259765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.881 [2024-11-15 10:49:58.259789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.881 [2024-11-15 10:49:58.259806] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.881 "name": "raid_bdev1", 00:21:27.881 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:27.881 "strip_size_kb": 0, 00:21:27.881 "state": "online", 00:21:27.881 "raid_level": "raid1", 00:21:27.881 "superblock": true, 00:21:27.881 "num_base_bdevs": 2, 00:21:27.881 "num_base_bdevs_discovered": 1, 00:21:27.881 "num_base_bdevs_operational": 1, 00:21:27.881 "base_bdevs_list": [ 00:21:27.881 { 00:21:27.881 "name": null, 00:21:27.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.881 "is_configured": false, 00:21:27.881 "data_offset": 0, 00:21:27.881 "data_size": 7936 00:21:27.881 }, 00:21:27.881 { 00:21:27.881 "name": "BaseBdev2", 00:21:27.881 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:27.881 "is_configured": true, 00:21:27.881 "data_offset": 256, 00:21:27.881 "data_size": 7936 00:21:27.881 } 00:21:27.881 ] 00:21:27.881 }' 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.881 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.448 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:28.449 "name": "raid_bdev1", 00:21:28.449 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:28.449 "strip_size_kb": 0, 00:21:28.449 "state": "online", 00:21:28.449 "raid_level": "raid1", 00:21:28.449 "superblock": true, 00:21:28.449 "num_base_bdevs": 2, 00:21:28.449 "num_base_bdevs_discovered": 1, 00:21:28.449 "num_base_bdevs_operational": 1, 00:21:28.449 "base_bdevs_list": [ 00:21:28.449 { 00:21:28.449 "name": null, 00:21:28.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.449 "is_configured": false, 00:21:28.449 "data_offset": 0, 00:21:28.449 "data_size": 7936 00:21:28.449 }, 00:21:28.449 { 00:21:28.449 "name": "BaseBdev2", 00:21:28.449 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:28.449 "is_configured": true, 00:21:28.449 "data_offset": 256, 00:21:28.449 "data_size": 7936 00:21:28.449 } 00:21:28.449 ] 00:21:28.449 }' 
00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.449 10:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.449 [2024-11-15 10:49:58.996237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.707 [2024-11-15 10:49:59.011105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:28.707 10:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.707 10:49:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:28.707 [2024-11-15 10:49:59.013447] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.644 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.644 "name": "raid_bdev1", 00:21:29.644 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:29.644 "strip_size_kb": 0, 00:21:29.644 "state": "online", 00:21:29.644 "raid_level": "raid1", 00:21:29.644 "superblock": true, 00:21:29.644 "num_base_bdevs": 2, 00:21:29.644 "num_base_bdevs_discovered": 2, 00:21:29.644 "num_base_bdevs_operational": 2, 00:21:29.644 "process": { 00:21:29.644 "type": "rebuild", 00:21:29.644 "target": "spare", 00:21:29.644 "progress": { 00:21:29.644 "blocks": 2560, 00:21:29.644 "percent": 32 00:21:29.644 } 00:21:29.644 }, 00:21:29.644 "base_bdevs_list": [ 00:21:29.644 { 00:21:29.644 "name": "spare", 00:21:29.644 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:29.644 "is_configured": true, 00:21:29.644 "data_offset": 256, 00:21:29.644 "data_size": 7936 00:21:29.644 }, 00:21:29.644 { 00:21:29.644 "name": "BaseBdev2", 00:21:29.644 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:29.644 "is_configured": true, 00:21:29.644 "data_offset": 256, 00:21:29.644 "data_size": 7936 00:21:29.644 } 00:21:29.644 ] 00:21:29.644 }' 00:21:29.645 10:50:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:29.645 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=794 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:29.645 10:50:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.645 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.902 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.902 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.902 "name": "raid_bdev1", 00:21:29.903 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:29.903 "strip_size_kb": 0, 00:21:29.903 "state": "online", 00:21:29.903 "raid_level": "raid1", 00:21:29.903 "superblock": true, 00:21:29.903 "num_base_bdevs": 2, 00:21:29.903 "num_base_bdevs_discovered": 2, 00:21:29.903 "num_base_bdevs_operational": 2, 00:21:29.903 "process": { 00:21:29.903 "type": "rebuild", 00:21:29.903 "target": "spare", 00:21:29.903 "progress": { 00:21:29.903 "blocks": 2816, 00:21:29.903 "percent": 35 00:21:29.903 } 00:21:29.903 }, 00:21:29.903 "base_bdevs_list": [ 00:21:29.903 { 00:21:29.903 "name": "spare", 00:21:29.903 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:29.903 "is_configured": true, 00:21:29.903 "data_offset": 256, 00:21:29.903 "data_size": 7936 00:21:29.903 }, 00:21:29.903 { 00:21:29.903 "name": "BaseBdev2", 00:21:29.903 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:29.903 "is_configured": true, 00:21:29.903 "data_offset": 256, 00:21:29.903 "data_size": 7936 00:21:29.903 } 00:21:29.903 ] 00:21:29.903 }' 00:21:29.903 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.903 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.903 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.903 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.903 10:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.839 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.098 10:50:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.098 "name": "raid_bdev1", 00:21:31.098 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:31.098 "strip_size_kb": 0, 00:21:31.098 "state": "online", 00:21:31.098 "raid_level": "raid1", 00:21:31.098 "superblock": true, 00:21:31.098 "num_base_bdevs": 2, 00:21:31.098 "num_base_bdevs_discovered": 2, 00:21:31.098 "num_base_bdevs_operational": 2, 00:21:31.098 "process": { 00:21:31.098 "type": "rebuild", 00:21:31.098 "target": "spare", 00:21:31.098 "progress": { 00:21:31.098 "blocks": 5888, 00:21:31.098 "percent": 74 00:21:31.098 } 00:21:31.098 }, 00:21:31.098 "base_bdevs_list": [ 00:21:31.098 { 00:21:31.098 "name": "spare", 00:21:31.098 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:31.098 "is_configured": true, 00:21:31.098 "data_offset": 256, 00:21:31.098 "data_size": 7936 00:21:31.098 }, 00:21:31.098 { 00:21:31.098 "name": "BaseBdev2", 00:21:31.098 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:31.098 "is_configured": true, 00:21:31.098 "data_offset": 256, 00:21:31.098 "data_size": 7936 00:21:31.098 } 00:21:31.098 ] 00:21:31.098 }' 00:21:31.098 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.098 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.098 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.098 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.099 10:50:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:31.667 [2024-11-15 10:50:02.132181] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:31.667 [2024-11-15 10:50:02.132279] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:31.667 [2024-11-15 10:50:02.132505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.234 "name": "raid_bdev1", 00:21:32.234 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:32.234 "strip_size_kb": 0, 00:21:32.234 "state": "online", 00:21:32.234 "raid_level": "raid1", 00:21:32.234 "superblock": true, 00:21:32.234 "num_base_bdevs": 2, 00:21:32.234 
"num_base_bdevs_discovered": 2, 00:21:32.234 "num_base_bdevs_operational": 2, 00:21:32.234 "base_bdevs_list": [ 00:21:32.234 { 00:21:32.234 "name": "spare", 00:21:32.234 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:32.234 "is_configured": true, 00:21:32.234 "data_offset": 256, 00:21:32.234 "data_size": 7936 00:21:32.234 }, 00:21:32.234 { 00:21:32.234 "name": "BaseBdev2", 00:21:32.234 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:32.234 "is_configured": true, 00:21:32.234 "data_offset": 256, 00:21:32.234 "data_size": 7936 00:21:32.234 } 00:21:32.234 ] 00:21:32.234 }' 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.234 10:50:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.234 "name": "raid_bdev1", 00:21:32.234 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:32.234 "strip_size_kb": 0, 00:21:32.234 "state": "online", 00:21:32.234 "raid_level": "raid1", 00:21:32.234 "superblock": true, 00:21:32.234 "num_base_bdevs": 2, 00:21:32.234 "num_base_bdevs_discovered": 2, 00:21:32.234 "num_base_bdevs_operational": 2, 00:21:32.234 "base_bdevs_list": [ 00:21:32.234 { 00:21:32.234 "name": "spare", 00:21:32.234 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:32.234 "is_configured": true, 00:21:32.234 "data_offset": 256, 00:21:32.234 "data_size": 7936 00:21:32.234 }, 00:21:32.234 { 00:21:32.234 "name": "BaseBdev2", 00:21:32.234 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:32.234 "is_configured": true, 00:21:32.234 "data_offset": 256, 00:21:32.234 "data_size": 7936 00:21:32.234 } 00:21:32.234 ] 00:21:32.234 }' 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:32.234 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:32.493 10:50:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.493 "name": 
"raid_bdev1", 00:21:32.493 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:32.493 "strip_size_kb": 0, 00:21:32.493 "state": "online", 00:21:32.493 "raid_level": "raid1", 00:21:32.493 "superblock": true, 00:21:32.493 "num_base_bdevs": 2, 00:21:32.493 "num_base_bdevs_discovered": 2, 00:21:32.493 "num_base_bdevs_operational": 2, 00:21:32.493 "base_bdevs_list": [ 00:21:32.493 { 00:21:32.493 "name": "spare", 00:21:32.493 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:32.493 "is_configured": true, 00:21:32.493 "data_offset": 256, 00:21:32.493 "data_size": 7936 00:21:32.493 }, 00:21:32.493 { 00:21:32.493 "name": "BaseBdev2", 00:21:32.493 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:32.493 "is_configured": true, 00:21:32.493 "data_offset": 256, 00:21:32.493 "data_size": 7936 00:21:32.493 } 00:21:32.493 ] 00:21:32.493 }' 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.493 10:50:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.061 [2024-11-15 10:50:03.348882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:33.061 [2024-11-15 10:50:03.349057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:33.061 [2024-11-15 10:50:03.349283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:33.061 [2024-11-15 10:50:03.349532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:33.061 [2024-11-15 
10:50:03.349700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.061 10:50:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.061 [2024-11-15 10:50:03.416864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:33.061 [2024-11-15 10:50:03.416932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.061 [2024-11-15 10:50:03.416966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:33.061 [2024-11-15 10:50:03.416981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.061 [2024-11-15 10:50:03.419404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.061 [2024-11-15 10:50:03.419579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:33.061 [2024-11-15 10:50:03.419671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:33.061 [2024-11-15 10:50:03.419736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:33.061 [2024-11-15 10:50:03.419884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:33.061 spare 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.061 [2024-11-15 10:50:03.519999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:33.061 [2024-11-15 10:50:03.520038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:33.061 [2024-11-15 10:50:03.520158] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:33.061 [2024-11-15 10:50:03.520271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:33.061 [2024-11-15 10:50:03.520289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:33.061 [2024-11-15 10:50:03.520429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.061 10:50:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.061 "name": "raid_bdev1", 00:21:33.061 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:33.061 "strip_size_kb": 0, 00:21:33.061 "state": "online", 00:21:33.061 "raid_level": "raid1", 00:21:33.061 "superblock": true, 00:21:33.061 "num_base_bdevs": 2, 00:21:33.061 "num_base_bdevs_discovered": 2, 00:21:33.061 "num_base_bdevs_operational": 2, 00:21:33.061 "base_bdevs_list": [ 00:21:33.061 { 00:21:33.061 "name": "spare", 00:21:33.061 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:33.061 "is_configured": true, 00:21:33.061 "data_offset": 256, 00:21:33.061 "data_size": 7936 00:21:33.061 }, 00:21:33.061 { 00:21:33.061 "name": "BaseBdev2", 00:21:33.061 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:33.061 "is_configured": true, 00:21:33.061 "data_offset": 256, 00:21:33.061 "data_size": 7936 00:21:33.061 } 00:21:33.061 ] 00:21:33.061 }' 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.061 10:50:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.629 10:50:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.629 "name": "raid_bdev1", 00:21:33.629 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:33.629 "strip_size_kb": 0, 00:21:33.629 "state": "online", 00:21:33.629 "raid_level": "raid1", 00:21:33.629 "superblock": true, 00:21:33.629 "num_base_bdevs": 2, 00:21:33.629 "num_base_bdevs_discovered": 2, 00:21:33.629 "num_base_bdevs_operational": 2, 00:21:33.629 "base_bdevs_list": [ 00:21:33.629 { 00:21:33.629 "name": "spare", 00:21:33.629 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:33.629 "is_configured": true, 00:21:33.629 "data_offset": 256, 00:21:33.629 "data_size": 7936 00:21:33.629 }, 00:21:33.629 { 00:21:33.629 "name": "BaseBdev2", 00:21:33.629 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:33.629 "is_configured": true, 00:21:33.629 "data_offset": 256, 00:21:33.629 "data_size": 7936 00:21:33.629 } 00:21:33.629 ] 00:21:33.629 }' 00:21:33.629 10:50:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.629 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 [2024-11-15 10:50:04.233220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:33.888 10:50:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.888 "name": "raid_bdev1", 00:21:33.888 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:33.888 "strip_size_kb": 0, 00:21:33.888 "state": "online", 00:21:33.888 
"raid_level": "raid1", 00:21:33.888 "superblock": true, 00:21:33.888 "num_base_bdevs": 2, 00:21:33.888 "num_base_bdevs_discovered": 1, 00:21:33.888 "num_base_bdevs_operational": 1, 00:21:33.888 "base_bdevs_list": [ 00:21:33.888 { 00:21:33.888 "name": null, 00:21:33.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.888 "is_configured": false, 00:21:33.888 "data_offset": 0, 00:21:33.888 "data_size": 7936 00:21:33.888 }, 00:21:33.888 { 00:21:33.888 "name": "BaseBdev2", 00:21:33.888 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:33.888 "is_configured": true, 00:21:33.888 "data_offset": 256, 00:21:33.888 "data_size": 7936 00:21:33.888 } 00:21:33.888 ] 00:21:33.888 }' 00:21:33.888 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.889 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.455 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:34.455 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.455 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.455 [2024-11-15 10:50:04.745391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:34.455 [2024-11-15 10:50:04.745789] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:34.455 [2024-11-15 10:50:04.745980] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:34.455 [2024-11-15 10:50:04.746189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:34.455 [2024-11-15 10:50:04.760923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:34.455 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.455 10:50:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:34.455 [2024-11-15 10:50:04.763510] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.391 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:35.391 "name": "raid_bdev1", 00:21:35.391 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:35.391 "strip_size_kb": 0, 00:21:35.391 "state": "online", 00:21:35.391 "raid_level": "raid1", 00:21:35.391 "superblock": true, 00:21:35.391 "num_base_bdevs": 2, 00:21:35.391 "num_base_bdevs_discovered": 2, 00:21:35.391 "num_base_bdevs_operational": 2, 00:21:35.391 "process": { 00:21:35.391 "type": "rebuild", 00:21:35.391 "target": "spare", 00:21:35.391 "progress": { 00:21:35.391 "blocks": 2560, 00:21:35.391 "percent": 32 00:21:35.391 } 00:21:35.391 }, 00:21:35.391 "base_bdevs_list": [ 00:21:35.391 { 00:21:35.391 "name": "spare", 00:21:35.391 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:35.391 "is_configured": true, 00:21:35.391 "data_offset": 256, 00:21:35.391 "data_size": 7936 00:21:35.391 }, 00:21:35.391 { 00:21:35.391 "name": "BaseBdev2", 00:21:35.391 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:35.391 "is_configured": true, 00:21:35.391 "data_offset": 256, 00:21:35.391 "data_size": 7936 00:21:35.391 } 00:21:35.391 ] 00:21:35.391 }' 00:21:35.392 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.392 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.392 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.392 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.392 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:35.392 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.392 10:50:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.392 [2024-11-15 10:50:05.921050] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:35.651 [2024-11-15 10:50:05.970550] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:35.651 [2024-11-15 10:50:05.970881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.651 [2024-11-15 10:50:05.971032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:35.651 [2024-11-15 10:50:05.971091] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.651 10:50:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.651 "name": "raid_bdev1", 00:21:35.651 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:35.651 "strip_size_kb": 0, 00:21:35.651 "state": "online", 00:21:35.651 "raid_level": "raid1", 00:21:35.651 "superblock": true, 00:21:35.651 "num_base_bdevs": 2, 00:21:35.651 "num_base_bdevs_discovered": 1, 00:21:35.651 "num_base_bdevs_operational": 1, 00:21:35.651 "base_bdevs_list": [ 00:21:35.651 { 00:21:35.651 "name": null, 00:21:35.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.651 "is_configured": false, 00:21:35.651 "data_offset": 0, 00:21:35.651 "data_size": 7936 00:21:35.651 }, 00:21:35.651 { 00:21:35.651 "name": "BaseBdev2", 00:21:35.651 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:35.651 "is_configured": true, 00:21:35.651 "data_offset": 256, 00:21:35.651 "data_size": 7936 00:21:35.651 } 00:21:35.651 ] 00:21:35.651 }' 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.651 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.219 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:36.219 10:50:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.219 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.219 [2024-11-15 10:50:06.518753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:36.219 [2024-11-15 10:50:06.518848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.219 [2024-11-15 10:50:06.518889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:36.219 [2024-11-15 10:50:06.518907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.219 [2024-11-15 10:50:06.519158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.219 [2024-11-15 10:50:06.519187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:36.219 [2024-11-15 10:50:06.519262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:36.219 [2024-11-15 10:50:06.519287] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:36.219 [2024-11-15 10:50:06.519301] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:36.219 [2024-11-15 10:50:06.519342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:36.219 [2024-11-15 10:50:06.533495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:36.219 spare 00:21:36.219 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.219 10:50:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:36.219 [2024-11-15 10:50:06.535793] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.154 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:37.154 "name": "raid_bdev1", 00:21:37.154 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:37.154 "strip_size_kb": 0, 00:21:37.154 "state": "online", 00:21:37.154 "raid_level": "raid1", 00:21:37.154 "superblock": true, 00:21:37.154 "num_base_bdevs": 2, 00:21:37.154 "num_base_bdevs_discovered": 2, 00:21:37.154 "num_base_bdevs_operational": 2, 00:21:37.154 "process": { 00:21:37.154 "type": "rebuild", 00:21:37.154 "target": "spare", 00:21:37.154 "progress": { 00:21:37.154 "blocks": 2560, 00:21:37.154 "percent": 32 00:21:37.154 } 00:21:37.154 }, 00:21:37.154 "base_bdevs_list": [ 00:21:37.155 { 00:21:37.155 "name": "spare", 00:21:37.155 "uuid": "1427cb96-e640-5438-a2e9-c3f0b8f3298a", 00:21:37.155 "is_configured": true, 00:21:37.155 "data_offset": 256, 00:21:37.155 "data_size": 7936 00:21:37.155 }, 00:21:37.155 { 00:21:37.155 "name": "BaseBdev2", 00:21:37.155 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:37.155 "is_configured": true, 00:21:37.155 "data_offset": 256, 00:21:37.155 "data_size": 7936 00:21:37.155 } 00:21:37.155 ] 00:21:37.155 }' 00:21:37.155 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.155 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.155 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.155 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.155 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:37.155 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.155 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.155 [2024-11-15 
10:50:07.697770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:37.414 [2024-11-15 10:50:07.742661] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:37.414 [2024-11-15 10:50:07.742915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.414 [2024-11-15 10:50:07.743090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:37.414 [2024-11-15 10:50:07.743144] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.414 10:50:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.414 "name": "raid_bdev1", 00:21:37.414 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:37.414 "strip_size_kb": 0, 00:21:37.414 "state": "online", 00:21:37.414 "raid_level": "raid1", 00:21:37.414 "superblock": true, 00:21:37.414 "num_base_bdevs": 2, 00:21:37.414 "num_base_bdevs_discovered": 1, 00:21:37.414 "num_base_bdevs_operational": 1, 00:21:37.414 "base_bdevs_list": [ 00:21:37.414 { 00:21:37.414 "name": null, 00:21:37.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.414 "is_configured": false, 00:21:37.414 "data_offset": 0, 00:21:37.414 "data_size": 7936 00:21:37.414 }, 00:21:37.414 { 00:21:37.414 "name": "BaseBdev2", 00:21:37.414 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:37.414 "is_configured": true, 00:21:37.414 "data_offset": 256, 00:21:37.414 "data_size": 7936 00:21:37.414 } 00:21:37.414 ] 00:21:37.414 }' 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.414 10:50:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:38.009 10:50:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.009 "name": "raid_bdev1", 00:21:38.009 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:38.009 "strip_size_kb": 0, 00:21:38.009 "state": "online", 00:21:38.009 "raid_level": "raid1", 00:21:38.009 "superblock": true, 00:21:38.009 "num_base_bdevs": 2, 00:21:38.009 "num_base_bdevs_discovered": 1, 00:21:38.009 "num_base_bdevs_operational": 1, 00:21:38.009 "base_bdevs_list": [ 00:21:38.009 { 00:21:38.009 "name": null, 00:21:38.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.009 "is_configured": false, 00:21:38.009 "data_offset": 0, 00:21:38.009 "data_size": 7936 00:21:38.009 }, 00:21:38.009 { 00:21:38.009 "name": "BaseBdev2", 00:21:38.009 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:38.009 "is_configured": true, 00:21:38.009 "data_offset": 256, 
00:21:38.009 "data_size": 7936 00:21:38.009 } 00:21:38.009 ] 00:21:38.009 }' 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 [2024-11-15 10:50:08.450207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:38.009 [2024-11-15 10:50:08.450280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.009 [2024-11-15 10:50:08.450328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:38.009 [2024-11-15 10:50:08.450344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.009 [2024-11-15 10:50:08.450600] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.009 [2024-11-15 10:50:08.450625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:38.009 [2024-11-15 10:50:08.450690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:38.009 [2024-11-15 10:50:08.450763] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:38.009 [2024-11-15 10:50:08.450784] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:38.009 [2024-11-15 10:50:08.450798] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:38.009 BaseBdev1 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.009 10:50:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.944 10:50:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.944 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.203 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.203 "name": "raid_bdev1", 00:21:39.203 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:39.203 "strip_size_kb": 0, 00:21:39.203 "state": "online", 00:21:39.203 "raid_level": "raid1", 00:21:39.203 "superblock": true, 00:21:39.203 "num_base_bdevs": 2, 00:21:39.203 "num_base_bdevs_discovered": 1, 00:21:39.203 "num_base_bdevs_operational": 1, 00:21:39.203 "base_bdevs_list": [ 00:21:39.203 { 00:21:39.203 "name": null, 00:21:39.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.203 "is_configured": false, 00:21:39.203 "data_offset": 0, 00:21:39.203 "data_size": 7936 00:21:39.203 }, 00:21:39.203 { 00:21:39.203 "name": "BaseBdev2", 00:21:39.203 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:39.203 "is_configured": true, 00:21:39.203 "data_offset": 256, 00:21:39.203 "data_size": 7936 00:21:39.203 } 00:21:39.203 ] 00:21:39.203 }' 00:21:39.203 10:50:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.203 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.462 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.462 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.462 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:39.462 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:39.462 10:50:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.462 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.462 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.462 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.462 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.462 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.719 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.719 "name": "raid_bdev1", 00:21:39.719 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:39.719 "strip_size_kb": 0, 00:21:39.719 "state": "online", 00:21:39.719 "raid_level": "raid1", 00:21:39.719 "superblock": true, 00:21:39.719 "num_base_bdevs": 2, 00:21:39.719 "num_base_bdevs_discovered": 1, 00:21:39.719 "num_base_bdevs_operational": 1, 00:21:39.720 "base_bdevs_list": [ 00:21:39.720 { 00:21:39.720 "name": 
null, 00:21:39.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.720 "is_configured": false, 00:21:39.720 "data_offset": 0, 00:21:39.720 "data_size": 7936 00:21:39.720 }, 00:21:39.720 { 00:21:39.720 "name": "BaseBdev2", 00:21:39.720 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:39.720 "is_configured": true, 00:21:39.720 "data_offset": 256, 00:21:39.720 "data_size": 7936 00:21:39.720 } 00:21:39.720 ] 00:21:39.720 }' 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.720 [2024-11-15 10:50:10.154857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:39.720 [2024-11-15 10:50:10.155093] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:39.720 [2024-11-15 10:50:10.155122] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:39.720 request: 00:21:39.720 { 00:21:39.720 "base_bdev": "BaseBdev1", 00:21:39.720 "raid_bdev": "raid_bdev1", 00:21:39.720 "method": "bdev_raid_add_base_bdev", 00:21:39.720 "req_id": 1 00:21:39.720 } 00:21:39.720 Got JSON-RPC error response 00:21:39.720 response: 00:21:39.720 { 00:21:39.720 "code": -22, 00:21:39.720 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:39.720 } 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.720 10:50:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.654 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.655 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:40.655 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.916 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.916 "name": "raid_bdev1", 00:21:40.916 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:40.916 "strip_size_kb": 0, 
00:21:40.916 "state": "online", 00:21:40.916 "raid_level": "raid1", 00:21:40.916 "superblock": true, 00:21:40.916 "num_base_bdevs": 2, 00:21:40.916 "num_base_bdevs_discovered": 1, 00:21:40.916 "num_base_bdevs_operational": 1, 00:21:40.916 "base_bdevs_list": [ 00:21:40.916 { 00:21:40.916 "name": null, 00:21:40.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.916 "is_configured": false, 00:21:40.916 "data_offset": 0, 00:21:40.916 "data_size": 7936 00:21:40.916 }, 00:21:40.916 { 00:21:40.916 "name": "BaseBdev2", 00:21:40.916 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:40.916 "is_configured": true, 00:21:40.916 "data_offset": 256, 00:21:40.916 "data_size": 7936 00:21:40.916 } 00:21:40.916 ] 00:21:40.916 }' 00:21:40.916 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.916 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:41.174 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:41.174 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.174 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:41.174 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:41.174 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.174 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.174 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.174 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:41.174 10:50:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.174 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.434 "name": "raid_bdev1", 00:21:41.434 "uuid": "356a0188-435f-418e-a9c2-8c030f3d71db", 00:21:41.434 "strip_size_kb": 0, 00:21:41.434 "state": "online", 00:21:41.434 "raid_level": "raid1", 00:21:41.434 "superblock": true, 00:21:41.434 "num_base_bdevs": 2, 00:21:41.434 "num_base_bdevs_discovered": 1, 00:21:41.434 "num_base_bdevs_operational": 1, 00:21:41.434 "base_bdevs_list": [ 00:21:41.434 { 00:21:41.434 "name": null, 00:21:41.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.434 "is_configured": false, 00:21:41.434 "data_offset": 0, 00:21:41.434 "data_size": 7936 00:21:41.434 }, 00:21:41.434 { 00:21:41.434 "name": "BaseBdev2", 00:21:41.434 "uuid": "5f01725b-1703-5c9d-b711-33d3c1c1c1fe", 00:21:41.434 "is_configured": true, 00:21:41.434 "data_offset": 256, 00:21:41.434 "data_size": 7936 00:21:41.434 } 00:21:41.434 ] 00:21:41.434 }' 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89665 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89665 ']' 00:21:41.434 10:50:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89665 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89665 00:21:41.434 killing process with pid 89665 00:21:41.434 Received shutdown signal, test time was about 60.000000 seconds 00:21:41.434 00:21:41.434 Latency(us) 00:21:41.434 [2024-11-15T10:50:11.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.434 [2024-11-15T10:50:11.994Z] =================================================================================================================== 00:21:41.434 [2024-11-15T10:50:11.994Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89665' 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89665 00:21:41.434 [2024-11-15 10:50:11.907023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:41.434 10:50:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89665 00:21:41.434 [2024-11-15 10:50:11.907177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.434 [2024-11-15 10:50:11.907241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:21:41.434 [2024-11-15 10:50:11.907259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:41.693 [2024-11-15 10:50:12.157615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:42.630 10:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:21:42.630 00:21:42.630 real 0m18.460s 00:21:42.630 user 0m25.389s 00:21:42.630 sys 0m1.263s 00:21:42.630 10:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:42.630 ************************************ 00:21:42.630 END TEST raid_rebuild_test_sb_md_interleaved 00:21:42.630 ************************************ 00:21:42.630 10:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.630 10:50:13 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:21:42.630 10:50:13 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:21:42.630 10:50:13 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89665 ']' 00:21:42.630 10:50:13 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89665 00:21:42.890 10:50:13 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:21:42.890 ************************************ 00:21:42.890 END TEST bdev_raid 00:21:42.890 ************************************ 00:21:42.890 00:21:42.890 real 12m57.123s 00:21:42.890 user 18m31.099s 00:21:42.890 sys 1m35.363s 00:21:42.890 10:50:13 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:42.890 10:50:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:42.890 10:50:13 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:42.890 10:50:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:42.890 10:50:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:42.890 10:50:13 -- common/autotest_common.sh@10 -- # set +x 00:21:42.890 
************************************ 00:21:42.890 START TEST spdkcli_raid 00:21:42.890 ************************************ 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:42.890 * Looking for test storage... 00:21:42.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.890 10:50:13 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:42.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.890 --rc genhtml_branch_coverage=1 00:21:42.890 --rc genhtml_function_coverage=1 00:21:42.890 --rc genhtml_legend=1 00:21:42.890 --rc geninfo_all_blocks=1 00:21:42.890 --rc geninfo_unexecuted_blocks=1 00:21:42.890 00:21:42.890 ' 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:42.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.890 --rc genhtml_branch_coverage=1 00:21:42.890 --rc genhtml_function_coverage=1 00:21:42.890 --rc genhtml_legend=1 00:21:42.890 --rc geninfo_all_blocks=1 00:21:42.890 --rc geninfo_unexecuted_blocks=1 00:21:42.890 00:21:42.890 ' 00:21:42.890 
10:50:13 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:42.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.890 --rc genhtml_branch_coverage=1 00:21:42.890 --rc genhtml_function_coverage=1 00:21:42.890 --rc genhtml_legend=1 00:21:42.890 --rc geninfo_all_blocks=1 00:21:42.890 --rc geninfo_unexecuted_blocks=1 00:21:42.890 00:21:42.890 ' 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:42.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.890 --rc genhtml_branch_coverage=1 00:21:42.890 --rc genhtml_function_coverage=1 00:21:42.890 --rc genhtml_legend=1 00:21:42.890 --rc geninfo_all_blocks=1 00:21:42.890 --rc geninfo_unexecuted_blocks=1 00:21:42.890 00:21:42.890 ' 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:42.890 10:50:13 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.890 10:50:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:42.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90342 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:42.890 10:50:13 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90342 00:21:42.891 10:50:13 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 90342 ']' 00:21:42.891 10:50:13 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.891 10:50:13 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:42.891 10:50:13 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.891 10:50:13 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:42.891 10:50:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 [2024-11-15 10:50:13.542919] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:21:43.148 [2024-11-15 10:50:13.543097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90342 ] 00:21:43.406 [2024-11-15 10:50:13.722462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:43.406 [2024-11-15 10:50:13.857478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.406 [2024-11-15 10:50:13.857491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.344 10:50:14 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:44.344 10:50:14 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:21:44.344 10:50:14 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:21:44.344 10:50:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.344 10:50:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.344 10:50:14 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:21:44.344 10:50:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:44.344 10:50:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.344 10:50:14 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:44.344 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:44.344 ' 00:21:46.249 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:21:46.249 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:21:46.249 10:50:16 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:21:46.249 10:50:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:46.249 10:50:16 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.249 10:50:16 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:21:46.249 10:50:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:46.249 10:50:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:46.249 10:50:16 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:21:46.249 ' 00:21:47.185 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:21:47.185 10:50:17 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:21:47.185 10:50:17 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.185 10:50:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.185 10:50:17 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:21:47.186 10:50:17 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:47.186 10:50:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.186 10:50:17 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:21:47.186 10:50:17 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:21:48.121 10:50:18 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:21:48.121 10:50:18 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:21:48.122 10:50:18 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:21:48.122 10:50:18 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:48.122 10:50:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:48.122 10:50:18 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:21:48.122 10:50:18 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.122 10:50:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:48.122 10:50:18 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:21:48.122 ' 00:21:49.058 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:21:49.058 10:50:19 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:21:49.058 10:50:19 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.058 10:50:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.058 10:50:19 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:21:49.058 10:50:19 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:49.058 10:50:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.058 10:50:19 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:21:49.058 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:21:49.058 ' 00:21:50.435 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:21:50.435 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:21:50.692 10:50:21 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:21:50.692 10:50:21 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:50.692 10:50:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:50.692 10:50:21 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90342 00:21:50.692 10:50:21 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90342 ']' 00:21:50.693 10:50:21 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90342 00:21:50.693 10:50:21 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:21:50.693 10:50:21 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:50.693 10:50:21 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90342 00:21:50.693 killing process with pid 90342 00:21:50.693 10:50:21 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:50.693 10:50:21 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:50.693 10:50:21 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90342' 00:21:50.693 10:50:21 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 90342 00:21:50.693 10:50:21 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 90342 00:21:53.228 10:50:23 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:21:53.228 10:50:23 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90342 ']' 00:21:53.228 10:50:23 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90342 00:21:53.228 10:50:23 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90342 ']' 00:21:53.228 10:50:23 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90342 00:21:53.228 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (90342) - No such process 00:21:53.228 Process with pid 90342 is not found 00:21:53.228 10:50:23 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 90342 is not found' 00:21:53.228 10:50:23 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:21:53.228 10:50:23 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:53.228 10:50:23 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:53.228 10:50:23 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:53.228 ************************************ 00:21:53.228 END TEST spdkcli_raid 
00:21:53.228 ************************************ 00:21:53.228 00:21:53.228 real 0m9.979s 00:21:53.228 user 0m21.040s 00:21:53.228 sys 0m0.938s 00:21:53.228 10:50:23 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:53.228 10:50:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:53.228 10:50:23 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:53.228 10:50:23 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:53.228 10:50:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:53.228 10:50:23 -- common/autotest_common.sh@10 -- # set +x 00:21:53.228 ************************************ 00:21:53.228 START TEST blockdev_raid5f 00:21:53.228 ************************************ 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:53.228 * Looking for test storage... 00:21:53.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.228 10:50:23 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:53.228 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.228 --rc genhtml_branch_coverage=1 00:21:53.228 --rc genhtml_function_coverage=1 00:21:53.228 --rc genhtml_legend=1 00:21:53.228 --rc geninfo_all_blocks=1 00:21:53.228 --rc geninfo_unexecuted_blocks=1 00:21:53.228 00:21:53.228 ' 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.228 --rc genhtml_branch_coverage=1 00:21:53.228 --rc genhtml_function_coverage=1 00:21:53.228 --rc genhtml_legend=1 00:21:53.228 --rc geninfo_all_blocks=1 00:21:53.228 --rc geninfo_unexecuted_blocks=1 00:21:53.228 00:21:53.228 ' 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.228 --rc genhtml_branch_coverage=1 00:21:53.228 --rc genhtml_function_coverage=1 00:21:53.228 --rc genhtml_legend=1 00:21:53.228 --rc geninfo_all_blocks=1 00:21:53.228 --rc geninfo_unexecuted_blocks=1 00:21:53.228 00:21:53.228 ' 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.228 --rc genhtml_branch_coverage=1 00:21:53.228 --rc genhtml_function_coverage=1 00:21:53.228 --rc genhtml_legend=1 00:21:53.228 --rc geninfo_all_blocks=1 00:21:53.228 --rc geninfo_unexecuted_blocks=1 00:21:53.228 00:21:53.228 ' 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90617 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:53.228 10:50:23 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90617 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90617 ']' 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:53.228 10:50:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:53.228 [2024-11-15 10:50:23.566716] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:21:53.228 [2024-11-15 10:50:23.567692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90617 ] 00:21:53.228 [2024-11-15 10:50:23.744115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.487 [2024-11-15 10:50:23.872888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.422 10:50:24 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:54.422 10:50:24 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:21:54.422 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:21:54.422 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:21:54.422 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:21:54.422 10:50:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.422 10:50:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:54.422 Malloc0 00:21:54.422 Malloc1 00:21:54.422 Malloc2 00:21:54.422 10:50:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.422 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:21:54.422 10:50:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.422 10:50:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:54.422 10:50:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:54.423 10:50:24 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:21:54.423 10:50:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:21:54.423 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "1a13b364-ec99-4657-845a-2260a82c2ed5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1a13b364-ec99-4657-845a-2260a82c2ed5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "1a13b364-ec99-4657-845a-2260a82c2ed5",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "839020a2-f698-4185-9b2a-8d7ba1020d17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "750da340-c005-4570-bf8d-fefe0f2add35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8baca066-bdf8-4a19-a9b7-d3e02fb14375",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:54.680 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:21:54.680 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:21:54.680 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:21:54.680 10:50:24 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90617 00:21:54.681 10:50:24 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90617 ']' 00:21:54.681 10:50:24 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90617 00:21:54.681 10:50:24 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:21:54.681 10:50:24 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:54.681 
10:50:25 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90617 00:21:54.681 killing process with pid 90617 00:21:54.681 10:50:25 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:54.681 10:50:25 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:54.681 10:50:25 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90617' 00:21:54.681 10:50:25 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90617 00:21:54.681 10:50:25 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90617 00:21:57.211 10:50:27 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:57.211 10:50:27 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:57.211 10:50:27 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:57.211 10:50:27 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:57.211 10:50:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:57.211 ************************************ 00:21:57.211 START TEST bdev_hello_world 00:21:57.211 ************************************ 00:21:57.211 10:50:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:57.212 [2024-11-15 10:50:27.464788] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:21:57.212 [2024-11-15 10:50:27.464952] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90683 ] 00:21:57.212 [2024-11-15 10:50:27.641917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.212 [2024-11-15 10:50:27.744373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.778 [2024-11-15 10:50:28.221288] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:57.778 [2024-11-15 10:50:28.221365] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:21:57.778 [2024-11-15 10:50:28.221394] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:57.778 [2024-11-15 10:50:28.221960] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:57.778 [2024-11-15 10:50:28.222138] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:57.778 [2024-11-15 10:50:28.222166] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:57.778 [2024-11-15 10:50:28.222237] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:21:57.778 00:21:57.778 [2024-11-15 10:50:28.222265] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:59.152 00:21:59.152 real 0m2.076s 00:21:59.152 user 0m1.740s 00:21:59.152 sys 0m0.210s 00:21:59.152 10:50:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:59.152 10:50:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:59.152 ************************************ 00:21:59.152 END TEST bdev_hello_world 00:21:59.152 ************************************ 00:21:59.152 10:50:29 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:21:59.152 10:50:29 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:59.152 10:50:29 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:59.152 10:50:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:59.152 ************************************ 00:21:59.152 START TEST bdev_bounds 00:21:59.152 ************************************ 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:21:59.152 Process bdevio pid: 90721 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90721 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90721' 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90721 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90721 ']' 00:21:59.152 10:50:29 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:59.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:59.152 10:50:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:59.152 [2024-11-15 10:50:29.594248] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:21:59.152 [2024-11-15 10:50:29.595094] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90721 ] 00:21:59.411 [2024-11-15 10:50:29.775524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:59.411 [2024-11-15 10:50:29.881777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.411 [2024-11-15 10:50:29.881902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.411 [2024-11-15 10:50:29.881912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.347 10:50:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:00.347 10:50:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:22:00.347 10:50:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:00.347 I/O targets: 00:22:00.347 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:22:00.347 00:22:00.347 
00:22:00.347 CUnit - A unit testing framework for C - Version 2.1-3 00:22:00.347 http://cunit.sourceforge.net/ 00:22:00.347 00:22:00.347 00:22:00.347 Suite: bdevio tests on: raid5f 00:22:00.347 Test: blockdev write read block ...passed 00:22:00.347 Test: blockdev write zeroes read block ...passed 00:22:00.347 Test: blockdev write zeroes read no split ...passed 00:22:00.347 Test: blockdev write zeroes read split ...passed 00:22:00.716 Test: blockdev write zeroes read split partial ...passed 00:22:00.717 Test: blockdev reset ...passed 00:22:00.717 Test: blockdev write read 8 blocks ...passed 00:22:00.717 Test: blockdev write read size > 128k ...passed 00:22:00.717 Test: blockdev write read invalid size ...passed 00:22:00.717 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:00.717 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:00.717 Test: blockdev write read max offset ...passed 00:22:00.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:00.717 Test: blockdev writev readv 8 blocks ...passed 00:22:00.717 Test: blockdev writev readv 30 x 1block ...passed 00:22:00.717 Test: blockdev writev readv block ...passed 00:22:00.717 Test: blockdev writev readv size > 128k ...passed 00:22:00.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:00.717 Test: blockdev comparev and writev ...passed 00:22:00.717 Test: blockdev nvme passthru rw ...passed 00:22:00.717 Test: blockdev nvme passthru vendor specific ...passed 00:22:00.717 Test: blockdev nvme admin passthru ...passed 00:22:00.717 Test: blockdev copy ...passed 00:22:00.717 00:22:00.717 Run Summary: Type Total Ran Passed Failed Inactive 00:22:00.717 suites 1 1 n/a 0 0 00:22:00.717 tests 23 23 23 0 0 00:22:00.717 asserts 130 130 130 0 n/a 00:22:00.717 00:22:00.717 Elapsed time = 0.588 seconds 00:22:00.717 0 00:22:00.717 10:50:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90721 00:22:00.717 
10:50:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90721 ']' 00:22:00.717 10:50:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90721 00:22:00.717 10:50:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:22:00.717 10:50:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:00.717 10:50:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90721 00:22:00.717 10:50:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:00.717 10:50:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:00.717 10:50:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90721' 00:22:00.717 killing process with pid 90721 00:22:00.717 10:50:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90721 00:22:00.717 10:50:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90721 00:22:02.108 10:50:32 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:02.108 00:22:02.108 real 0m2.777s 00:22:02.108 user 0m7.022s 00:22:02.108 sys 0m0.367s 00:22:02.108 10:50:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:02.108 ************************************ 00:22:02.108 10:50:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:02.108 END TEST bdev_bounds 00:22:02.108 ************************************ 00:22:02.108 10:50:32 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:02.108 10:50:32 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:02.108 10:50:32 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:02.108 
10:50:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:02.108 ************************************ 00:22:02.108 START TEST bdev_nbd 00:22:02.108 ************************************ 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:22:02.108 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90785 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90785 /var/tmp/spdk-nbd.sock 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90785 ']' 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:02.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:02.109 10:50:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:02.109 [2024-11-15 10:50:32.423756] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:22:02.109 [2024-11-15 10:50:32.424086] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.109 [2024-11-15 10:50:32.607018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.367 [2024-11-15 10:50:32.714088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:02.933 10:50:33 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:03.500 1+0 records in 00:22:03.500 1+0 records out 00:22:03.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225099 s, 18.2 MB/s 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:03.500 10:50:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:03.500 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:03.500 { 00:22:03.500 "nbd_device": "/dev/nbd0", 00:22:03.500 "bdev_name": "raid5f" 00:22:03.500 } 00:22:03.500 ]' 00:22:03.500 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:03.500 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:03.500 { 00:22:03.500 "nbd_device": "/dev/nbd0", 00:22:03.500 "bdev_name": "raid5f" 00:22:03.500 } 00:22:03.500 ]' 00:22:03.500 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:03.758 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:03.758 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:03.758 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:03.758 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:03.758 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:03.758 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:03.758 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:04.017 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:04.275 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:04.276 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:04.276 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:04.276 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:04.276 10:50:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:04.534 /dev/nbd0 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.534 10:50:35 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.534 1+0 records in 00:22:04.534 1+0 records out 00:22:04.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396447 s, 10.3 MB/s 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:04.534 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:04.792 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:05.051 { 00:22:05.051 "nbd_device": "/dev/nbd0", 00:22:05.051 "bdev_name": "raid5f" 00:22:05.051 } 00:22:05.051 ]' 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:05.051 { 00:22:05.051 "nbd_device": "/dev/nbd0", 00:22:05.051 "bdev_name": "raid5f" 00:22:05.051 } 00:22:05.051 ]' 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:05.051 256+0 records in 00:22:05.051 256+0 records out 00:22:05.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00737165 s, 142 MB/s 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:05.051 256+0 records in 00:22:05.051 256+0 records out 00:22:05.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.037396 s, 28.0 MB/s 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.051 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:05.311 10:50:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:05.879 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:06.138 malloc_lvol_verify 00:22:06.138 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:06.397 48b971a9-1129-44fb-b662-4571ac2d9ac7 00:22:06.397 10:50:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:06.656 a0fb0a8a-d410-4107-8b6e-d0f5adb8f002 00:22:06.656 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:06.915 /dev/nbd0 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:06.915 mke2fs 1.47.0 (5-Feb-2023) 00:22:06.915 Discarding device blocks: 0/4096 done 00:22:06.915 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:06.915 00:22:06.915 Allocating group tables: 0/1 done 00:22:06.915 Writing inode tables: 0/1 done 00:22:06.915 Creating journal (1024 blocks): done 00:22:06.915 Writing superblocks and filesystem accounting information: 0/1 done 00:22:06.915 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:06.915 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:07.173 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:07.173 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:07.173 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:07.174 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:07.174 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:07.174 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90785 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90785 ']' 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90785 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90785 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:07.438 killing process with pid 90785 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90785' 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90785 00:22:07.438 10:50:37 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90785 00:22:08.828 10:50:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:08.828 00:22:08.828 real 0m6.752s 00:22:08.828 user 0m10.040s 00:22:08.828 sys 0m1.236s 00:22:08.828 10:50:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:08.828 ************************************ 00:22:08.828 END TEST bdev_nbd 00:22:08.828 ************************************ 00:22:08.828 10:50:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:08.828 10:50:39 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:22:08.828 10:50:39 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:22:08.828 10:50:39 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:22:08.828 10:50:39 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:22:08.828 10:50:39 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:08.828 10:50:39 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:08.828 10:50:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:08.828 ************************************ 00:22:08.828 START TEST bdev_fio 00:22:08.828 ************************************ 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:08.828 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:08.828 10:50:39 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:08.828 ************************************ 00:22:08.828 START TEST bdev_fio_rw_verify 00:22:08.828 ************************************ 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:08.828 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:08.829 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:08.829 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:22:08.829 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:08.829 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:08.829 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:08.829 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:22:08.829 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:08.829 10:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:09.087 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:09.087 fio-3.35 00:22:09.087 Starting 1 thread 00:22:21.293 00:22:21.293 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90997: Fri Nov 15 10:50:50 2024 00:22:21.293 read: IOPS=8243, BW=32.2MiB/s (33.8MB/s)(322MiB/10001msec) 00:22:21.293 slat (usec): min=23, max=144, avg=30.02, stdev= 5.14 00:22:21.293 clat (usec): min=13, max=504, avg=194.36, stdev=71.96 00:22:21.293 lat (usec): min=41, max=562, avg=224.38, stdev=72.72 00:22:21.293 clat percentiles (usec): 00:22:21.293 | 50.000th=[ 198], 99.000th=[ 343], 99.900th=[ 396], 99.990th=[ 445], 00:22:21.293 | 99.999th=[ 506] 00:22:21.293 write: IOPS=8702, BW=34.0MiB/s (35.6MB/s)(335MiB/9866msec); 0 zone resets 00:22:21.293 slat (usec): min=11, max=223, avg=24.46, stdev= 6.02 00:22:21.293 clat (usec): min=79, max=935, avg=436.11, stdev=58.35 00:22:21.293 lat (usec): min=101, max=1081, avg=460.57, stdev=59.78 00:22:21.293 clat percentiles (usec): 00:22:21.293 | 50.000th=[ 441], 99.000th=[ 586], 99.900th=[ 668], 99.990th=[ 840], 00:22:21.293 | 99.999th=[ 938] 00:22:21.293 bw ( KiB/s): min=32016, max=36352, per=98.36%, avg=34238.32, stdev=1098.65, samples=19 00:22:21.293 iops : min= 8004, max= 9088, avg=8559.58, stdev=274.66, samples=19 00:22:21.293 lat (usec) : 20=0.01%, 100=5.56%, 250=29.97%, 
500=59.22%, 750=5.23% 00:22:21.293 lat (usec) : 1000=0.02% 00:22:21.293 cpu : usr=98.73%, sys=0.39%, ctx=36, majf=0, minf=7232 00:22:21.293 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:21.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.293 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.293 issued rwts: total=82439,85856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.293 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:21.293 00:22:21.293 Run status group 0 (all jobs): 00:22:21.293 READ: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=322MiB (338MB), run=10001-10001msec 00:22:21.293 WRITE: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=335MiB (352MB), run=9866-9866msec 00:22:21.552 ----------------------------------------------------- 00:22:21.552 Suppressions used: 00:22:21.552 count bytes template 00:22:21.552 1 7 /usr/src/fio/parse.c 00:22:21.552 850 81600 /usr/src/fio/iolog.c 00:22:21.552 1 8 libtcmalloc_minimal.so 00:22:21.552 1 904 libcrypto.so 00:22:21.552 ----------------------------------------------------- 00:22:21.552 00:22:21.552 00:22:21.552 real 0m12.839s 00:22:21.552 user 0m13.206s 00:22:21.552 sys 0m0.802s 00:22:21.552 10:50:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:21.552 10:50:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:21.552 ************************************ 00:22:21.552 END TEST bdev_fio_rw_verify 00:22:21.552 ************************************ 00:22:21.552 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "1a13b364-ec99-4657-845a-2260a82c2ed5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1a13b364-ec99-4657-845a-2260a82c2ed5",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "1a13b364-ec99-4657-845a-2260a82c2ed5",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "839020a2-f698-4185-9b2a-8d7ba1020d17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "750da340-c005-4570-bf8d-fefe0f2add35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8baca066-bdf8-4a19-a9b7-d3e02fb14375",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:21.811 /home/vagrant/spdk_repo/spdk 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:22:21.811 00:22:21.811 real 
0m13.056s 00:22:21.811 user 0m13.309s 00:22:21.811 sys 0m0.886s 00:22:21.811 ************************************ 00:22:21.811 END TEST bdev_fio 00:22:21.811 ************************************ 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:21.811 10:50:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:21.811 10:50:52 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:21.811 10:50:52 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:21.811 10:50:52 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:22:21.811 10:50:52 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:21.811 10:50:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:21.811 ************************************ 00:22:21.811 START TEST bdev_verify 00:22:21.811 ************************************ 00:22:21.811 10:50:52 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:21.811 [2024-11-15 10:50:52.349531] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 
00:22:21.811 [2024-11-15 10:50:52.349706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91156 ] 00:22:22.070 [2024-11-15 10:50:52.534958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:22.328 [2024-11-15 10:50:52.687635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.328 [2024-11-15 10:50:52.687635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.895 Running I/O for 5 seconds... 00:22:24.845 12305.00 IOPS, 48.07 MiB/s [2024-11-15T10:50:56.356Z] 11689.50 IOPS, 45.66 MiB/s [2024-11-15T10:50:57.292Z] 12235.00 IOPS, 47.79 MiB/s [2024-11-15T10:50:58.668Z] 12737.75 IOPS, 49.76 MiB/s [2024-11-15T10:50:58.668Z] 12789.20 IOPS, 49.96 MiB/s 00:22:28.108 Latency(us) 00:22:28.108 [2024-11-15T10:50:58.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.108 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:28.108 Verification LBA range: start 0x0 length 0x2000 00:22:28.108 raid5f : 5.03 6409.83 25.04 0.00 0.00 29954.31 121.95 25618.62 00:22:28.108 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:28.108 Verification LBA range: start 0x2000 length 0x2000 00:22:28.108 raid5f : 5.02 6367.54 24.87 0.00 0.00 30127.55 417.05 25856.93 00:22:28.108 [2024-11-15T10:50:58.668Z] =================================================================================================================== 00:22:28.108 [2024-11-15T10:50:58.668Z] Total : 12777.36 49.91 0.00 0.00 30040.64 121.95 25856.93 00:22:29.044 00:22:29.044 real 0m7.260s 00:22:29.044 user 0m13.319s 00:22:29.044 sys 0m0.269s 00:22:29.044 10:50:59 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:29.044 10:50:59 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:29.044 ************************************ 00:22:29.044 END TEST bdev_verify 00:22:29.044 ************************************ 00:22:29.044 10:50:59 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:29.044 10:50:59 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:22:29.044 10:50:59 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:29.044 10:50:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:29.044 ************************************ 00:22:29.044 START TEST bdev_verify_big_io 00:22:29.044 ************************************ 00:22:29.044 10:50:59 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:29.303 [2024-11-15 10:50:59.668035] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:22:29.303 [2024-11-15 10:50:59.668197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91257 ] 00:22:29.303 [2024-11-15 10:50:59.852725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:29.563 [2024-11-15 10:50:59.984660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.563 [2024-11-15 10:50:59.984669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.130 Running I/O for 5 seconds... 
00:22:32.442 568.00 IOPS, 35.50 MiB/s [2024-11-15T10:51:03.938Z] 728.50 IOPS, 45.53 MiB/s [2024-11-15T10:51:04.872Z] 761.33 IOPS, 47.58 MiB/s [2024-11-15T10:51:05.808Z] 777.00 IOPS, 48.56 MiB/s [2024-11-15T10:51:06.066Z] 812.00 IOPS, 50.75 MiB/s 00:22:35.506 Latency(us) 00:22:35.506 [2024-11-15T10:51:06.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.506 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:35.506 Verification LBA range: start 0x0 length 0x200 00:22:35.506 raid5f : 5.31 406.03 25.38 0.00 0.00 7767759.22 195.49 337450.82 00:22:35.506 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:35.506 Verification LBA range: start 0x200 length 0x200 00:22:35.506 raid5f : 5.24 412.04 25.75 0.00 0.00 7601930.67 197.35 341263.83 00:22:35.506 [2024-11-15T10:51:06.066Z] =================================================================================================================== 00:22:35.506 [2024-11-15T10:51:06.066Z] Total : 818.07 51.13 0.00 0.00 7684806.52 195.49 341263.83 00:22:36.892 00:22:36.892 real 0m7.540s 00:22:36.892 user 0m13.920s 00:22:36.892 sys 0m0.241s 00:22:36.892 10:51:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:36.892 10:51:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:36.892 ************************************ 00:22:36.892 END TEST bdev_verify_big_io 00:22:36.892 ************************************ 00:22:36.892 10:51:07 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:36.892 10:51:07 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:22:36.892 10:51:07 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:36.892 10:51:07 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:36.892 ************************************ 00:22:36.892 START TEST bdev_write_zeroes 00:22:36.892 ************************************ 00:22:36.892 10:51:07 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:36.892 [2024-11-15 10:51:07.218885] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:22:36.892 [2024-11-15 10:51:07.219058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91352 ] 00:22:36.892 [2024-11-15 10:51:07.395863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.152 [2024-11-15 10:51:07.523493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.720 Running I/O for 1 seconds... 
00:22:38.655 19575.00 IOPS, 76.46 MiB/s 00:22:38.655 Latency(us) 00:22:38.655 [2024-11-15T10:51:09.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.655 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:38.655 raid5f : 1.01 19543.33 76.34 0.00 0.00 6523.71 1921.40 8698.41 00:22:38.655 [2024-11-15T10:51:09.215Z] =================================================================================================================== 00:22:38.655 [2024-11-15T10:51:09.216Z] Total : 19543.33 76.34 0.00 0.00 6523.71 1921.40 8698.41 00:22:40.032 00:22:40.032 real 0m3.179s 00:22:40.032 user 0m2.814s 00:22:40.032 sys 0m0.231s 00:22:40.032 10:51:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:40.032 ************************************ 00:22:40.032 END TEST bdev_write_zeroes 00:22:40.032 ************************************ 00:22:40.032 10:51:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:40.032 10:51:10 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:40.032 10:51:10 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:22:40.032 10:51:10 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:40.032 10:51:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.032 ************************************ 00:22:40.032 START TEST bdev_json_nonenclosed 00:22:40.032 ************************************ 00:22:40.032 10:51:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:40.032 [2024-11-15 
10:51:10.450276] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:22:40.032 [2024-11-15 10:51:10.450492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91401 ] 00:22:40.291 [2024-11-15 10:51:10.637622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.291 [2024-11-15 10:51:10.741091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.291 [2024-11-15 10:51:10.741189] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:40.291 [2024-11-15 10:51:10.741228] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:40.291 [2024-11-15 10:51:10.741243] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:40.550 00:22:40.550 real 0m0.654s 00:22:40.550 user 0m0.427s 00:22:40.550 sys 0m0.121s 00:22:40.550 ************************************ 00:22:40.550 END TEST bdev_json_nonenclosed 00:22:40.550 ************************************ 00:22:40.550 10:51:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:40.550 10:51:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:40.550 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:40.550 10:51:11 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:22:40.550 10:51:11 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:40.550 10:51:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.550 
************************************ 00:22:40.550 START TEST bdev_json_nonarray 00:22:40.550 ************************************ 00:22:40.550 10:51:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:40.807 [2024-11-15 10:51:11.132046] Starting SPDK v25.01-pre git sha1 59da1a1d7 / DPDK 24.03.0 initialization... 00:22:40.807 [2024-11-15 10:51:11.132216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91432 ] 00:22:40.807 [2024-11-15 10:51:11.309646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.065 [2024-11-15 10:51:11.436410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.066 [2024-11-15 10:51:11.436551] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:22:41.066 [2024-11-15 10:51:11.436585] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:41.066 [2024-11-15 10:51:11.436618] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:41.324 00:22:41.324 real 0m0.670s 00:22:41.324 user 0m0.450s 00:22:41.324 sys 0m0.114s 00:22:41.324 10:51:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:41.324 10:51:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:41.324 ************************************ 00:22:41.324 END TEST bdev_json_nonarray 00:22:41.324 ************************************ 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:41.324 10:51:11 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:41.324 00:22:41.324 real 0m48.481s 00:22:41.324 user 1m7.330s 00:22:41.324 sys 0m4.484s 00:22:41.324 10:51:11 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:41.324 ************************************ 00:22:41.324 END TEST blockdev_raid5f 00:22:41.324 
************************************ 00:22:41.324 10:51:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:41.324 10:51:11 -- spdk/autotest.sh@194 -- # uname -s 00:22:41.324 10:51:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:41.324 10:51:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:41.324 10:51:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:41.324 10:51:11 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:41.324 10:51:11 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:22:41.324 10:51:11 -- spdk/autotest.sh@256 -- # timing_exit lib 00:22:41.324 10:51:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.324 10:51:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.324 10:51:11 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:22:41.324 10:51:11 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:22:41.324 10:51:11 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:41.325 10:51:11 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:41.325 10:51:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:41.325 10:51:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:41.325 10:51:11 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:41.325 10:51:11 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 
00:22:41.325 10:51:11 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:22:41.325 10:51:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.325 10:51:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.325 10:51:11 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:22:41.325 10:51:11 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:22:41.325 10:51:11 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:22:41.325 10:51:11 -- common/autotest_common.sh@10 -- # set +x 00:22:43.227 INFO: APP EXITING 00:22:43.227 INFO: killing all VMs 00:22:43.227 INFO: killing vhost app 00:22:43.227 INFO: EXIT DONE 00:22:43.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:43.227 Waiting for block devices as requested 00:22:43.227 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:43.227 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:44.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:44.163 Cleaning 00:22:44.163 Removing: /var/run/dpdk/spdk0/config 00:22:44.163 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:44.163 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:44.163 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:44.163 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:44.163 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:44.163 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:44.163 Removing: /dev/shm/spdk_tgt_trace.pid57155 00:22:44.163 Removing: /var/run/dpdk/spdk0 00:22:44.163 Removing: /var/run/dpdk/spdk_pid56931 00:22:44.163 Removing: /var/run/dpdk/spdk_pid57155 00:22:44.163 Removing: /var/run/dpdk/spdk_pid57379 00:22:44.163 Removing: /var/run/dpdk/spdk_pid57486 00:22:44.163 Removing: /var/run/dpdk/spdk_pid57539 00:22:44.163 Removing: /var/run/dpdk/spdk_pid57667 00:22:44.163 Removing: /var/run/dpdk/spdk_pid57685 00:22:44.163 
Removing: /var/run/dpdk/spdk_pid57895 00:22:44.163 Removing: /var/run/dpdk/spdk_pid58000 00:22:44.163 Removing: /var/run/dpdk/spdk_pid58106 00:22:44.163 Removing: /var/run/dpdk/spdk_pid58218 00:22:44.163 Removing: /var/run/dpdk/spdk_pid58326 00:22:44.163 Removing: /var/run/dpdk/spdk_pid58366 00:22:44.163 Removing: /var/run/dpdk/spdk_pid58402 00:22:44.163 Removing: /var/run/dpdk/spdk_pid58478 00:22:44.163 Removing: /var/run/dpdk/spdk_pid58592 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59066 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59141 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59215 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59231 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59366 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59382 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59523 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59539 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59603 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59621 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59685 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59714 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59896 00:22:44.163 Removing: /var/run/dpdk/spdk_pid59933 00:22:44.163 Removing: /var/run/dpdk/spdk_pid60016 00:22:44.163 Removing: /var/run/dpdk/spdk_pid61375 00:22:44.163 Removing: /var/run/dpdk/spdk_pid61593 00:22:44.163 Removing: /var/run/dpdk/spdk_pid61733 00:22:44.163 Removing: /var/run/dpdk/spdk_pid62386 00:22:44.163 Removing: /var/run/dpdk/spdk_pid62599 00:22:44.163 Removing: /var/run/dpdk/spdk_pid62747 00:22:44.163 Removing: /var/run/dpdk/spdk_pid63401 00:22:44.163 Removing: /var/run/dpdk/spdk_pid63731 00:22:44.163 Removing: /var/run/dpdk/spdk_pid63878 00:22:44.163 Removing: /var/run/dpdk/spdk_pid65285 00:22:44.163 Removing: /var/run/dpdk/spdk_pid65548 00:22:44.163 Removing: /var/run/dpdk/spdk_pid65689 00:22:44.163 Removing: /var/run/dpdk/spdk_pid67097 00:22:44.163 Removing: /var/run/dpdk/spdk_pid67351 00:22:44.163 Removing: /var/run/dpdk/spdk_pid67501 00:22:44.163 Removing: 
/var/run/dpdk/spdk_pid68904 00:22:44.163 Removing: /var/run/dpdk/spdk_pid69361 00:22:44.163 Removing: /var/run/dpdk/spdk_pid69501 00:22:44.163 Removing: /var/run/dpdk/spdk_pid71015 00:22:44.163 Removing: /var/run/dpdk/spdk_pid71280 00:22:44.163 Removing: /var/run/dpdk/spdk_pid71426 00:22:44.163 Removing: /var/run/dpdk/spdk_pid72935 00:22:44.163 Removing: /var/run/dpdk/spdk_pid73205 00:22:44.163 Removing: /var/run/dpdk/spdk_pid73352 00:22:44.163 Removing: /var/run/dpdk/spdk_pid74853 00:22:44.163 Removing: /var/run/dpdk/spdk_pid75347 00:22:44.163 Removing: /var/run/dpdk/spdk_pid75493 00:22:44.163 Removing: /var/run/dpdk/spdk_pid75631 00:22:44.163 Removing: /var/run/dpdk/spdk_pid76084 00:22:44.163 Removing: /var/run/dpdk/spdk_pid76851 00:22:44.163 Removing: /var/run/dpdk/spdk_pid77231 00:22:44.163 Removing: /var/run/dpdk/spdk_pid77927 00:22:44.163 Removing: /var/run/dpdk/spdk_pid78416 00:22:44.163 Removing: /var/run/dpdk/spdk_pid79209 00:22:44.163 Removing: /var/run/dpdk/spdk_pid79630 00:22:44.163 Removing: /var/run/dpdk/spdk_pid81653 00:22:44.163 Removing: /var/run/dpdk/spdk_pid82099 00:22:44.163 Removing: /var/run/dpdk/spdk_pid82545 00:22:44.163 Removing: /var/run/dpdk/spdk_pid84676 00:22:44.163 Removing: /var/run/dpdk/spdk_pid85166 00:22:44.163 Removing: /var/run/dpdk/spdk_pid85681 00:22:44.163 Removing: /var/run/dpdk/spdk_pid86764 00:22:44.163 Removing: /var/run/dpdk/spdk_pid87097 00:22:44.163 Removing: /var/run/dpdk/spdk_pid88045 00:22:44.163 Removing: /var/run/dpdk/spdk_pid88375 00:22:44.163 Removing: /var/run/dpdk/spdk_pid89341 00:22:44.163 Removing: /var/run/dpdk/spdk_pid89665 00:22:44.163 Removing: /var/run/dpdk/spdk_pid90342 00:22:44.163 Removing: /var/run/dpdk/spdk_pid90617 00:22:44.163 Removing: /var/run/dpdk/spdk_pid90683 00:22:44.163 Removing: /var/run/dpdk/spdk_pid90721 00:22:44.163 Removing: /var/run/dpdk/spdk_pid90982 00:22:44.163 Removing: /var/run/dpdk/spdk_pid91156 00:22:44.163 Removing: /var/run/dpdk/spdk_pid91257 00:22:44.163 Removing: 
/var/run/dpdk/spdk_pid91352 00:22:44.163 Removing: /var/run/dpdk/spdk_pid91401 00:22:44.163 Removing: /var/run/dpdk/spdk_pid91432 00:22:44.163 Clean 00:22:44.421 10:51:14 -- common/autotest_common.sh@1451 -- # return 0 00:22:44.421 10:51:14 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:22:44.421 10:51:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.421 10:51:14 -- common/autotest_common.sh@10 -- # set +x 00:22:44.421 10:51:14 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:22:44.421 10:51:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.421 10:51:14 -- common/autotest_common.sh@10 -- # set +x 00:22:44.421 10:51:14 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:44.421 10:51:14 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:44.422 10:51:14 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:44.422 10:51:14 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:22:44.422 10:51:14 -- spdk/autotest.sh@394 -- # hostname 00:22:44.422 10:51:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:44.680 geninfo: WARNING: invalid characters removed from testname! 
00:23:16.750 10:51:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:16.750 10:51:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:18.653 10:51:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:21.185 10:51:51 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:24.470 10:51:54 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:27.001 10:51:57 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:29.580 10:52:00 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:29.580 10:52:00 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:29.580 10:52:00 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:29.580 10:52:00 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:29.580 10:52:00 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:29.580 10:52:00 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:29.580 + [[ -n 5255 ]] 00:23:29.580 + sudo kill 5255 00:23:29.849 [Pipeline] } 00:23:29.867 [Pipeline] // timeout 00:23:29.873 [Pipeline] } 00:23:29.887 [Pipeline] // stage 00:23:29.894 [Pipeline] } 00:23:29.909 [Pipeline] // catchError 00:23:29.918 [Pipeline] stage 00:23:29.921 [Pipeline] { (Stop VM) 00:23:29.935 [Pipeline] sh 00:23:30.216 + vagrant halt 00:23:34.411 ==> default: Halting domain... 00:23:39.693 [Pipeline] sh 00:23:39.970 + vagrant destroy -f 00:23:44.181 ==> default: Removing domain... 
00:23:44.193 [Pipeline] sh 00:23:44.471 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:23:44.479 [Pipeline] } 00:23:44.493 [Pipeline] // stage 00:23:44.499 [Pipeline] } 00:23:44.513 [Pipeline] // dir 00:23:44.519 [Pipeline] } 00:23:44.532 [Pipeline] // wrap 00:23:44.539 [Pipeline] } 00:23:44.551 [Pipeline] // catchError 00:23:44.560 [Pipeline] stage 00:23:44.563 [Pipeline] { (Epilogue) 00:23:44.575 [Pipeline] sh 00:23:44.856 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:51.439 [Pipeline] catchError 00:23:51.441 [Pipeline] { 00:23:51.455 [Pipeline] sh 00:23:51.736 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:51.994 Artifacts sizes are good 00:23:52.003 [Pipeline] } 00:23:52.018 [Pipeline] // catchError 00:23:52.030 [Pipeline] archiveArtifacts 00:23:52.037 Archiving artifacts 00:23:52.139 [Pipeline] cleanWs 00:23:52.150 [WS-CLEANUP] Deleting project workspace... 00:23:52.150 [WS-CLEANUP] Deferred wipeout is used... 00:23:52.156 [WS-CLEANUP] done 00:23:52.158 [Pipeline] } 00:23:52.174 [Pipeline] // stage 00:23:52.179 [Pipeline] } 00:23:52.195 [Pipeline] // node 00:23:52.201 [Pipeline] End of Pipeline 00:23:52.235 Finished: SUCCESS